Boeing pilot involved in Max testing is indicted in Texas
Competency is a box-ticking exercise as per the regulators' requirements. Proficiency is what is missing.
I'm an unabashed dinosaur. Yes, the pilot does need to be on top of the attractive and eye-catching bells and whistles but when things get too far out of the routine perceptions of line pilot reality, unless the pilot is able to do the stick and rudder stuff adequately, all might be lost very quickly.
Wasn't MCAS created and put in the background to provide exactly the stick and rudder feel of the old days while masking the fact that new, heavier, more powerful engines had been moved far forward of the cg changing the physical flight behaviour beyond the expected? Are today's aircraft still "honest" about their behavior? We even have artificial stick feel. We are far down the slippery slope already aren't we?
National Academy of Sciences report
Reuters reporting on the release (as of Wednesday 22 June) of the National Academy of Sciences report on the FAA's Transport Airplane Risk Assessment Methodology (TARAM), a report ordered by Congress as part of its legislative response to the two 737 MAX accidents.
https://www.reuters.com/business/aer...es-2022-06-22/
Source for the report:
https://nap.nationalacademies.org/26519
With a nomination for FAA Administrator pending before the Senate, and news reports to some extent favoring the nominee precisely because he is not from within the FAA domain apart from a brief stint in airport administration, progress (or lack of it) toward confirmation could get very interesting. Or does lacking extensive, meaningful experience with the FAA's administrative functions really constitute a "plus" for solving the agency's recently revealed issues and problems?
IMHO, and based on decades of experience in aircraft certification, I think putting an outsider - with no experience with aircraft cert - in charge of the FAA is about the dumbest thing they could do.
If a plane crashed due to pilot error, would any sane person think the solution would be to put a non-pilot in the pilot's seat?
I am amazed that you are amazed. By design there is no accountability in the C-suite. A few lawmakers have tried to remedy this over the years, but they were all quickly crushed by the massive industrial lobbying infrastructure designed to insulate the bosses from the consequences of their decisions.
The profound lack of understanding of the value of engineering excellence led to the company culture that inevitably led to the smoking hole. A board that didn't know what it didn't know directly led to the appointment of a C-suite that incentivized all the wrong behaviours.
That ended so well that the obvious solution is apparently to ignore what happened and appoint an FAA administrator with no professional aviation qualifications or significant relevant aviation experience, so the cycle of management "success" can go on.
Re the NAS report - post #186
“This report responds to the statement of task specified in the Aircraft Certification, Safety, and Accountability Act. It must be noted at the outset that this report does not assess the application of the TARAM process to any specific incidents or accidents, including the 737 MAX accidents. While the committee was provided a copy of the 737 MAX TARAM analysis provided by the FAA to Congress in late 2019, FAA management declined to provide additional details or to discuss the TARAM analysis of the 737 MAX with the committee. The committee, therefore, was unable to comment on the 737 MAX TARAM analysis. Regardless, the committee was able to make recommendations that, if adopted, would significantly improve the TARAM process.” ( my emphasis )
However, the recommendations that the FAA review its process address several issues which are suspected to have been factors in the 737 Max certification; i.e. - recommendations:
Designate new FAA expertise; difficult if this was lacking, no overnight experts.
Define the type of data to be monitored; ‘definitions’ tend to restrict, and ‘monitor’ implies after the fact, not aiding initial type certification.
A significant issue is to quantify human performance. The Max certification assumed (or was persuaded to assume) that the pilot would be capable of managing an abnormal situation (without guidance and training, etc).
Conceptually this is a continuing issue in all certifications which require expert judgement (normally manufacturer); it challenges the ideas of inservice HF expertise, and if the required performance (the judged level of safety) can ever be quantified.
The report continues with recommendations on computation and analysis, implying that human attributes and uncertainties in operation can be expressed numerically.
Probabilistic risk assessment is increasingly difficult in safe industries because of diminishing safety data, and thus depends more on subjective assessments; computers lack subjectivity (other than that given by humans).
And so on …; tasks and challenges which the FAA, or any other authority might not achieve.
The meaningful issues involve the need to look at safety differently, not rewrite the book (it has got us this far), but to use alternative views as additions and enhancements; considering the human as an asset, error is normal, considering the wider system, etc.
This could become an industry-wide issue, particularly if other regulators adopt the NAS report or revised FAA guidance because of the salience of the 737 Max failure, as opposed to thinking much more deeply and widely about the uncertainties in modern operations involving human activity at all levels (Congress, NAS, FAA), where individuals' personal qualities might be only a small part of the need to think about safety differently.
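The sparse-data point above can be illustrated with a toy Bayesian sketch. All numbers below are invented for illustration (they are not FAA or fleet data): with only one event in the record, the analyst's prior, i.e. subjective judgement, still dominates the estimated failure rate, which is exactly why "computed" risk in a very safe industry remains a human judgement in disguise.

```python
# Toy Beta-Binomial sketch: with very few observed failures, the
# posterior on a per-flight failure rate stays dominated by the
# (subjective) prior. All numbers are invented, not real fleet data.

def beta_posterior(prior_a, prior_b, failures, flights):
    """Update a Beta(prior_a, prior_b) prior on per-flight failure
    probability after observing `failures` in `flights` flights.
    Returns (posterior mean, posterior standard deviation)."""
    a = prior_a + failures
    b = prior_b + (flights - failures)
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))  # Beta variance
    return mean, var ** 0.5

# Two analysts, different priors, identical sparse data
# (1 event in 200,000 flights):
optimist = beta_posterior(1, 1_000_000, failures=1, flights=200_000)
pessimist = beta_posterior(1, 10_000, failures=1, flights=200_000)

print(f"optimist  mean rate: {optimist[0]:.2e}")   # ~1.7e-06
print(f"pessimist mean rate: {pessimist[0]:.2e}")  # ~9.5e-06
```

Same data, yet the two "computed" rates differ by a factor of roughly six, purely from the choice of prior: the subjectivity the post describes.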
FAR Part 25.101:
........(1) Be able to be consistently executed in service by crews of average skill;......
Isn't part of the issue here that "average skill" is something that has been going down for years? This has been driven mostly by airlines offloading training costs onto new pilots, which has made the pool of available pilots self-selecting on the basis of ability to pay rather than talent and ability.
It's a bit subjective. But the design requirement of FAR Part 25.101 does specifically exclude above-standard pilot skill and attention for in-service flying. Yes, it can be a bit of a variable to have a very practiced company test pilot limit the application of their skill to "average", but if in doubt, the test pilot confers with the authority in advance of a finding of design compliance.
........(1) Be able to be consistently executed in service by crews of average skill;......
I am more worried about these types of design requirements. Nowadays, in new drafts, there is no longer room for such a "non-specification" type of specification.
Bottom line, such a vague specification would imply that the specified design item is allowed to fail with 50% of "crews". Granted, it depends a bit on how you define "average" as well, with the average level fluctuating during the day; however, it still implies a huge number of allowed failures, according to the specification. Add enough of these specifications and the Swiss cheese holes start lining up.
And having these vague specifications opens the door wide to manipulation of specification interpretation by management, as we have seen with the MAX.
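The "50% of crews" worry can be made concrete with a toy model. Purely for illustration, assume skill is a measurable normal score (it is not, and whether 25.101 actually sets a median pass/fail threshold is itself one of the points of debate in this thread):

```python
# Illustrative only: treat "pilot skill" as a normally distributed
# score (mean 100, sd 15 -- arbitrary units, not any real metric).
from statistics import NormalDist

skill = NormalDist(mu=100, sigma=15)

# If a procedure demands exactly the average skill level, the
# fraction of crews falling below that demand is, by definition:
fail_at_average = skill.cdf(100)        # 0.5

# If instead the demand is pitched at the 5th-percentile pilot:
demand_5th = skill.inv_cdf(0.05)
fail_at_5th = skill.cdf(demand_5th)     # ~0.05

print(fail_at_average, fail_at_5th)
```

On this (deliberately naive) reading, a task calibrated exactly to the median pilot is failed by half the population; the later replies in the thread dispute that this is what the regulation means.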
25.101; I am familiar with this; it relates to initial certification.
NAS refers to the FAA process for calculating risks associated with continued-operational-safety (COS), used for inservice aircraft; this is based on the Transport Airplane Risk Assessment Methodology (TARAM)*.
Thus any relationship with the 737 Max involves activities after the first accident, where the FAA was adamant that their safety model did not justify grounding the Max.
NAS concentrates on the weaknesses in the safety model (computer analysis) and its 'expert' use (human / computer decision vs judgement). The model used quantified risk 'data' based on a numerical certification assessment (weak subjective analysis of human performance), and comparison with similar incidents across the lifetime of the original aircraft type.
Certification depends on knowledge of the system (MCAS unknown or poorly understood by FAA). Malfunction recovery depended on crew action (assumed same as trim runaway), which requires a qualitative assessment to assess the situation recognition and timely action (uncertain human behaviour).
After an inservice event, this data would be considered against the lifetime history of all 737 variants.
However, if the 737 Max safety risk was modelled without MCAS (most likely), then the first Max accident could be mis-designated as a rare ‘trim failure’; compared with a lengthy aircraft history without previous trim related accidents (all variants), and that mitigation required timely crew action (which suited Boeing’s approach - blame the crew / operator).
(and don't forget the old thread on ‘rollercoaster’ manoeuvre for trim failure - assumed crew recognition and action)
NAS identifies generic safety errors in modelling, which with deduction suggests that MCAS should have been designated as a unique new system, such that the first accident would have stood out as a ‘first’ early in the lifetime of a ‘new’ aircraft.
The FAA’s false belief in 737 Max continued safety may have been strengthened by the ‘recovered incident’ before the second accident; although the crew misdiagnosed the MCAS failure (insufficient knowledge / training), they fortunately chose the correct action, which the FAA took as vindication of its (false) understanding and public position.
After the second accident, other regulatory authorities appear to have suspected errors in the airworthiness analysis and chose to re-evaluate both this and the FAA’s original certification.
This is a valuable lesson for future common certifications and safety modelling - questioning how to model crew activity, ‘average’ or otherwise, and how this might be represented numerically for computation.
‘Average’ in this sense is an inappropriate concept; also there is significant risk in ‘digitising’ human activity, both input judgement and biased output application.
* ANM 100 TARAM https://www.faa.gov/regulations_poli...C-06222015.pdf
https://rgl.faa.gov/Regulatory_and_Guidance_Library/rgPolicy.nsf/0/4e5ae8707164674a862579510061f96b/$FILE/PS-ANM-25-05%20TARAM%20Handbook.pdf
(cut and paste)
NAS Report (link broken?)
https://nap.nationalacademies.org/ca...ecord_id=26519
https://www.nationalacademies.org/ou...nt-methodology
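A much-simplified, hypothetical sketch of the kind of arithmetic a continued-operational-safety model performs. Every number and factor below is invented for illustration; real TARAM analyses use fleet data, defect rates, and conditional-probability trees. The point it shows: the assumed probability that the crew fails to recover is a multiplier that swings the bottom line by orders of magnitude.

```python
# Hypothetical, heavily simplified COS risk sketch (invented numbers).

def fleet_risk(fleet_size, flights_per_ac_per_year, years_to_fix,
               failure_rate_per_flight, p_accident_given_failure):
    """Expected number of accidents before a fix is fielded."""
    exposure = fleet_size * flights_per_ac_per_year * years_to_fix
    return exposure * failure_rate_per_flight * p_accident_given_failure

# The crew-response assumption is the lever: assume crews recover
# 99% of failures vs only 50%, with everything else held constant.
optimistic = fleet_risk(400, 1500, 2, 1e-5, p_accident_given_failure=0.01)
pessimistic = fleet_risk(400, 1500, 2, 1e-5, p_accident_given_failure=0.50)

print(optimistic, pessimistic)  # a factor-of-50 difference
```

Whoever controls the crew-response assumption effectively controls the computed fleet risk, which is one reading of why quantifying ‘average’ human performance matters so much in this argument.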
@safetypee
Yep, it is all true what you write; though the moment you can use a ruler to check compliance with a certification aspect, it's much more difficult to cheat. Of course, FAR 25.101 wasn't the reason for the MAX crashes, but the fact that the historic certification criteria are full of these vague definitions is a jackpot for management wanting to reason their way out of compliance checking. As long as those deciding are operating honestly, it's not a problem. The moment cheating is requested, it has become too easy to do, since it's all "interpretation".
One of the more memorable parts described Boeing lawyers descending into the air terminal in Indonesia to secure releases from the relatives of that crash. They were accompanied by Indonesian police.
To your point about the vagueness of the "average pilot" specification, attempting to define this more tightly at the legislative level would be a herculean task, I would have thought. To date we have relied on the industry ensuring, as much as possible, that "below-average" pilots do not progress into flying positions that require "average" skills. While this sounds subjective, in my experience (as a military instructor pilot and senior international airline captain) I am very comfortable with the way "averageness" was tested non-subjectively. No doubt this is imperfect, and boundaries will always be tested by the unscrupulous, but I wish the best of luck in the world to anyone who would try to codify in a central document exactly what the skills of an average pilot should look like. We should never let up on the unscrupulous, though.
To be thoroughly depressed, check out the book "Flying Blind" by Peter Robison. It seems very well researched and dispassionate.
BPF & TDR make logical cases for competent management of the FAA; it needs competency in at least 2 major disciplines, engineering/risk management, and operations/risk management. Both of these areas seem to have missed the plot in recent times, and the results have been disastrous. The industry has had poor outcomes in the certification of the planes, and we fail too frequently in the training of crews. Along with that, there is a shortage of maintenance engineers as the job has become less inviting, and the depth of expertise in overhaul facilities is hurting. The industry has issues...
At some point, these issues need to be addressed, preferably sooner than later. To achieve all needs, restructuring of the FAA senior organogram may be a fair option, to get TAD/ACOs and FS/FSDOs working towards a more effective system. In both cases, the industry itself needs to provide a voice on what it needs, and what it doesn't consider helpful in the current structure and regulatory system that has developed since Wilbur and Orville decided to make a small fortune out of a big one. ICAO is also due a score card on how the bureaucratizing of aviation is coming along; they are not always the solution to the industry's needs. Had our regulators existed in 1903, Gulf War I would have ended with a no-ride zone, not a no-fly zone.
I don't think the inclusion of "average" here implies any such thing about the failure "rate". While the acceptable rate of failure may well, and definitely should, be specified elsewhere in the legislation, in this case surely this is merely specifying that procedures to cope with the failure of any component must not depend upon techniques that the "average" pilot might not be able to master within the course of their career development and specific training to be approved to fly the aircraft in question.
To your point about the vagueness of the "average pilot" specification, attempting to more tightly define this at the legislative level would be a herculean task I would have thought. To date we have relied on the industry ensuring, as much as possible, that "below-average" pilots do not progress into flying positions that require "average" skills. While this sounds subjective, in my experience (as a military instructor pilot and senior international airline captain) I am very comfortable with the way "averageness" was tested non-subjectively. No doubt this is imperfect and boundaries will always be tested by the unscrupulous, but I wish the best of luck in the world to anyone who would try and codify in a central document exactly what the skills of an average pilot should look like. We should never let up on the unscrupulous 'tho.
To your point about the vagueness of the "average pilot" specification, attempting to more tightly define this at the legislative level would be a herculean task I would have thought. To date we have relied on the industry ensuring, as much as possible, that "below-average" pilots do not progress into flying positions that require "average" skills. While this sounds subjective, in my experience (as a military instructor pilot and senior international airline captain) I am very comfortable with the way "averageness" was tested non-subjectively. No doubt this is imperfect and boundaries will always be tested by the unscrupulous, but I wish the best of luck in the world to anyone who would try and codify in a central document exactly what the skills of an average pilot should look like. We should never let up on the unscrupulous 'tho.
"Average" has little to do with predictability, more with a large sample result calculation/evaluation afterwards. But, hey, certification is not about registering history, though setting rules, how history should develop. Average has no place in that.