Automation and individual skills.

SavannahMann

Platinum Member
Nov 16, 2016
We all remember the airplane crash in San Francisco where Asiana Airlines Flight 214 hit the seawall short of the runway. Asiana Airlines Flight 214 - Wikipedia

Airplane automation was a major contributing factor in the accident, not just on the day of the crash but for years before it. Asiana required its pilots to let the computer fly the plane, and the airline discouraged hand-flown visual approaches. The visual approach wasn’t quite outlawed by the company, but nearly so, and the pilots had not flown one in years. Worse, they had been trained that the safety systems of the plane would not allow too little power, so while the plane sank dangerously low and slow, the pilots sat calmly, certain the autothrottle would apply the power needed to hold the glide path.

This is one of a growing number of incidents in which pilots relied on automation and, when it failed, were at a loss as to how to actually manage the emergency. They were behind the power curve from the beginning of the crisis and did not have time to assess the situation and get the computer to do its job. The pilots were systems controllers managing the computer more than they were pilots flying the plane.

There are other examples, such as Air France Flight 447, which crashed into the Atlantic. Air France Flight 447 - Wikipedia

The pilots were unprepared to fly the plane once the automation failed.

We also remember the story of Sully, who managed a water landing on the Hudson, one of the very few fully successful emergency water landings of a large passenger jet in history. So why was Sully able to land the plane in a worst-case scenario when others could not handle essentially routine situations?

Sully was an old-school pilot. He had spent most of his life flying the planes he was in. He had years of experience with his hands on the controls and many thousands of landings in less-than-ideal circumstances. He was flying long before the computers took over.

I said that the computers took over, and that is literally the truth. Before, pilots could disengage the autopilot and fly the plane by hand. The controls they manipulated were connected directly to the control surfaces and engines through hydraulics or cables, not computers. If the computers failed, the aircraft could and would fly based entirely upon the inputs of the pilot.

Now, even with the automation switched off, every input still passes through the computers. You move the stick to the left, and the computer decides whether it wants the plane to do that. You pull the stick back, and the computers are thinking about letting the plane do it. And if the computer is wrong, there is no way for the pilot to override the damned thing.

Qantas Flight 72 - Wikipedia
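To make that concrete, here is a minimal, made-up sketch of the kind of envelope-protection logic I’m describing: the stick position is treated as a request, and the computer clamps it to whatever it considers safe. Every limit, gain, and name below is invented for illustration; this is not any real Airbus or Boeing control law.

```python
# Toy sketch of fly-by-wire "envelope protection." The stick input is a
# request; the computer clamps it to what it considers safe. All numbers
# and names here are invented for illustration only.

MAX_BANK_DEG = 67       # hypothetical bank-angle protection limit
MAX_PITCH_UP_DEG = 30   # hypothetical nose-up limit
MAX_PITCH_DN_DEG = -15  # hypothetical nose-down limit

def commanded_attitude(stick_roll, stick_pitch, bank, pitch):
    """Map raw stick deflection (-1.0 .. 1.0) to an attitude command,
    clamped to the protection envelope."""
    requested_bank = bank + stick_roll * 15.0    # invented gains
    requested_pitch = pitch + stick_pitch * 5.0
    # The computer, not the pilot, has the final word:
    new_bank = max(-MAX_BANK_DEG, min(MAX_BANK_DEG, requested_bank))
    new_pitch = max(MAX_PITCH_DN_DEG, min(MAX_PITCH_UP_DEG, requested_pitch))
    return new_bank, new_pitch

# Full back stick near the pitch limit barely moves the nose: the
# pilot's input has been reduced to a suggestion.
print(commanded_attitude(stick_roll=0.0, stick_pitch=1.0, bank=0.0, pitch=29.0))
# -> (0.0, 30.0)
```

When the computer’s picture of the world is right, that clamp saves lives. When it’s wrong, the pilot is pulling on a stick that the software has decided to ignore.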



Advocates of automation will point to statistics showing that the accident rate has decreased thanks to it. I am not a big believer in that argument. Here’s why: when the computer is wrong, when the automation fails, you want a trained and experienced human in control. That trained and experienced pilot takes time and training to produce. It takes years to reach the level of competence you want up front in that plane.

The more automation takes over, and the longer this goes on, the less likely we are to find a Sully in front of the plane, able to land it safely. The more likely we are to get pilots who are confused by the situation and whose last actions are attempts to get the computer doing the job again. We are losing pilots up front and getting poorly trained computer systems technicians instead.

We may never again have a Chuck Yeager, or a Scott Carpenter able to identify the problem and manually manage it through to a landing. We may never again find ourselves with a Neil Armstrong, who on Gemini 8 saved the capsule and the crew after a thruster stuck open. Today’s pilots would probably die trying to get the computer to reboot and fly the damned capsule.

Automation is one of those things: it’s cool when it works, but it needs an override button that lets someone we’ve spent a lot of money training take over and tell the computer to shut up and do what it’s told. We need those pilots to fly visual approaches and get hours of hands-on flying. We need those pilots trained and experienced for the moments when disaster lurks near at hand.
 
SavannahMann said: …

The other side of the equation is that pilots like Sully are probably as good as human pilots will ever get. Automation will continue to improve with no limits.
 
  • Thread starter
  • #3

“The other side of the equation is that pilots like Sully are probably as good as human pilots will ever get. Automation will continue to improve with no limits.”

You say that, but.....

One of the reasons Sully decided to put it into the water, despite the low probability of success (which is why it was called the Miracle on the Hudson), is collateral damage.

Putting the plane down on a road might have had a higher probability of success, but it also had a much higher probability of killing a lot of people on the ground. One of the things often heard on cockpit voice recorders during emergencies is the pilots discussing where to aim the plane if they can’t save it. At night, they look for a dark area, because no lights generally means no people on the ground to kill.

Killing everyone on the plane is bad. Killing everyone on the plane and just as many people on the ground is worse. Pilots know this, and they will try to minimize the collateral damage. Some of the worst accidents are the ones where the plane strikes a building, and no, I am not talking about 9/11. El Al Flight 1862 - Wikipedia

The pilots were trying to make the runway, and who can blame them for that? But if they had understood that there was literally no way for them to land, I think they would have set it down in the water or crashed it into a deserted patch of ground.

The computer, on the other hand, has priorities, and the first priority is the plane. Safety of the aircraft is paramount. That priority is what happened with the Qantas jet that got me on this subject: the plane argued with the pilots, refusing their manual commands. The computer ignored the pilots to protect the plane as the computer understood it.

Self-driving cars are almost reality. But at some point a car is going to be headed into an accident. A truck will block the road, and the computer will have a fraction of a second to decide what to do. If the computer’s priority is to avoid the collision, it will swerve, up onto a sidewalk, to save the car. It won’t think about the kids standing right there or the little old lady leaning on her walker. Those are just obstacles to be avoided, if possible.

One of the reasons the Apollo missions succeeded is that Hal Laning, a computer scientist at MIT, came up with the idea of priority scheduling for the guidance computer. The program alarms during the Apollo 11 landing were the computer doing exactly what it was supposed to: dumping lower-priority jobs to keep the more important ones running. The computer was following its program. But in the end, the astronauts were in control, directing the craft to do what they wanted.
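For the curious, here is a toy sketch of that idea: priority-driven scheduling that sheds low-priority work when there isn’t capacity for everything. It only illustrates the principle; the actual Apollo Guidance Computer executive was far more sophisticated, and the job names and capacity below are invented.

```python
import heapq

# Toy sketch of a priority-driven executive in the spirit of the AGC's:
# when there is not enough capacity for every job, the lowest-priority
# work gets shed and the critical jobs keep running. An illustration of
# the idea, not the actual Apollo software.

CAPACITY = 3  # hypothetical: job slots available per cycle

def run_cycle(jobs):
    """jobs: list of (priority, name); lower number = more important."""
    heapq.heapify(jobs)
    ran, shed = [], []
    while jobs:
        _, name = heapq.heappop(jobs)
        (ran if len(ran) < CAPACITY else shed).append(name)
    return ran, shed

ran, shed = run_cycle([(1, "steer descent"), (2, "update altitude"),
                       (3, "drive displays"), (4, "rendezvous radar")])
print("ran: ", ran)   # ran:  ['steer descent', 'update altitude', 'drive displays']
print("shed:", shed)  # shed: ['rendezvous radar']
```

The overloaded computer keeps steering the descent and drops the least important work on the floor, which is roughly what those famous 1202 alarms meant.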

The computers should be an aid to the humans, and the human should be the one in control. When the computers treat human actions as suggestions, the pilot is not in control; the computer is.

If computer automation had been directing the plane that Sully set down in the water, in a maneuver with a very low probability of success, it would not have chosen that action. It would have gone for the higher-probability option: the attempt to return to the runway and the subsequent crash into a neighborhood, or the attempt at an emergency landing on a road and the resulting mass-casualty event.

Yes, humans make mistakes. Yes, we screw things up daily. Lord knows I screw up often enough that I sometimes think I have enough bad luck for two. But we also have the skills to minimize the severity of an accident.

True story: I was an over-the-road truck driver, driving on a divided interstate with no barrier between the roadways, just a grass strip. A truck coming the other way was not paying enough attention and nearly ran into a traffic slowdown. The road on my side was clear, no traffic. That truck avoided its accident by swerving across the grass strip and into oncoming traffic, right in front of me.

I was going to have an accident; it was unavoidable. There was not enough distance to stop. I could swerve left into the oncoming lanes. I could continue forward and run into the truck now stopped on the road and blocking both shoulders. Or I could swerve right and drive up a grassy embankment sloped at maybe twenty to thirty degrees.

I swerved right. Yes, I would probably lay the truck down on its side, but I would avoid driving into either oncoming traffic or a stationary object of essentially equal mass. It was the least worst choice. I missed the stopped truck by inches and started up the slope. Having avoided one accident, I instinctively realized that stomping on the brakes was a bad choice. I stood on the accelerator with the truck bouncing like hell over the uneven ground, steered left, felt the truck teetering and wanting to lay over, went with it, and tried to surf the truck along the slope back down to the road.

It was a maneuver well above my skill level. I got very lucky and managed to pull it off. It was the best and worst day of trucking in my experience, and I knew it was luck as much as anything else. But it worked because I kept choosing the least worst option.

A computer would have calculated the odds and decided that swerving left offered the higher probability of success, because the ground there was flat, and flat ground is what trucks need, not steep slopes. It would have caused a head-on collision with severe injuries or deaths.
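The difference is in the objective function. Here is a back-of-the-envelope sketch of that moment: the same three options, scored two ways. Every probability and harm figure is invented purely to illustrate the point.

```python
# Toy sketch: the same three escape options, scored by two different
# objectives. All probabilities and harm counts are invented.

options = {
    #                                   P(my truck crashes), people harmed if it goes wrong
    "swerve left (flat, oncoming lane)": (0.30, 4),  # flat ground, but head-on risk
    "brake straight (stopped truck)":    (0.90, 2),
    "swerve right (steep embankment)":   (0.60, 1),  # likely only the driver
}

# Objective A: maximize the truck's own odds -- picks the flat ground.
best_for_truck = min(options, key=lambda k: options[k][0])

# Objective B: minimize expected harm to people -- picks the embankment.
best_for_people = min(options, key=lambda k: options[k][0] * options[k][1])

print(best_for_truck)   # swerve left (flat, oncoming lane)
print(best_for_people)  # swerve right (steep embankment)
```

Same situation, same math, opposite answers. The choice that saved everyone that day only comes out of the calculation if somebody decided in advance that harm to other people outweighs the vehicle.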

The trucker I had just missed called me on the CB and said it was the best driving he had ever seen. I said thanks, and when he asked, I told him I was fine. He got his truck out of the road and headed back the way he had come. No bent metal, no injuries, no fatalities. It was a good day.

I sat on the side of the road with the hazards going for a few minutes while my heart rate stabilized and my cigarette burned down to ash from me inhaling it.

A human chooses to minimize the accident. A computer chooses the highest probability for itself. I could have dumped the truck on its side, and probably should have; driving on a slope like that is a recipe for disaster. But if it did dump, then nobody but me was going to get hurt or killed.

That is what the pilot is thinking when he asks the copilot if there are any dark spots out there while they are fighting for control of the airplane. The pilot, if trained and experienced, will find a way if there is one.



It happens. Letting the computer fly the plane, the odds might work in your favor, but probability is a harsh bastard when it doesn’t.
 
SavannahMann said: …

Great story, but saying a human’s priorities are different from a computer’s is silly. The computer can be programmed with the same priorities, and while a split-second decision is rough on a human, it gets easier for computers every generation as they get more powerful. In your close call there was great driving on your part, but the root cause was human error by the other driver. If a computer had been driving the other truck, there would have been no distractions, so it would likely have been able to slow or stop and avoid the problem. Further, it would most likely be in (radio/wireless/Bluetooth?) contact with the vehicles around it and would anticipate the congestion. It would also know if other vehicles were in its vicinity at all times. And it would not speed or be in a hurry.
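As a rough illustration of the anticipation point: a vehicle that hears broadcast speed reports from traffic ahead can start easing off miles before its own sensors could see the jam. The message format, threshold, and lookahead below are all invented; real vehicle-to-vehicle protocols would look nothing like this simple.

```python
# Toy sketch of anticipating congestion from broadcast speed reports.
# Threshold, lookahead, and message format are invented for illustration.

SLOWDOWN_MPH = 25      # hypothetical: speeds below this mean congestion
LOOKAHEAD_MILES = 3.0  # hypothetical: how far ahead we act on reports

def advisory_speed(own_speed_mph, broadcasts):
    """broadcasts: list of (distance_ahead_miles, reported_speed_mph)
    heard over the radio from vehicles up the road."""
    reports_ahead = [speed for dist, speed in broadcasts
                     if dist <= LOOKAHEAD_MILES]
    if reports_ahead and min(reports_ahead) < SLOWDOWN_MPH:
        # Traffic ahead is jammed: ease off now, miles early, instead
        # of braking hard when the jam finally comes into view.
        return min(own_speed_mph, SLOWDOWN_MPH)
    return own_speed_mph

# A truck doing 65 hears that traffic 1.2 miles ahead is down to 10 mph:
print(advisory_speed(65, [(2.5, 62), (1.2, 10)]))  # -> 25
```

The distracted truck in the story never gets surprised by the slowdown in the first place, which is the whole argument.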
 
