
Self-Driving Tesla: “It Will Try To Kill You”

Software security expert Dan O’Dowd, founder of “The Dawn Project,” joins us to critique Tesla’s Full Self-Driving software, which his team found makes a critical error every eight minutes. Then Ralph welcomes Steve Hutkins, the founder and editor of “Save the Post Office,” to ask why Trump’s Postmaster General, Louis DeJoy, still has a job.

Dan O’Dowd is the CEO of Green Hills Software and is the world’s leading expert in creating software that never fails and can’t be hacked. Mr. O’Dowd created the secure operating systems for projects including Boeing’s 787s, Lockheed Martin’s F-35 fighter jets, the Boeing B-1B intercontinental nuclear bomber, and NASA’s Orion Crew Exploration Vehicle. He is the founder of The Dawn Project, which aims to make computers safer for humanity by making systems unhackable.

Not to put too fine a point on it, it is the single worst piece of commercial software I have ever seen. It doesn’t do anything useful. You take your perfectly good Tesla; it will try to kill you.

Dan O’Dowd, founder of The Dawn Project, on Tesla’s Full Self-Driving

Did nobody at Tesla ever test whether it runs over little children? They really ought to have a test like that before they put their software out. If they did do the test, they should go to jail, because they shipped the product knowing it does that.

Dan O’Dowd, founder of The Dawn Project, on Tesla’s Full Self-Driving

Steve Hutkins is a retired English professor who taught place studies and travel literature at the Gallatin School of New York University. He is the founder and editor of “Save the Post Office”.

[Louis DeJoy’s] vision of the Postal Service is one that is not really to the benefit of the country as a whole. He has only one concern and that is the bottom line. He’s part of this long tradition of corporatizing the Postal Service.

Steve Hutkins of “Save the Post Office”

Ralph Nader Radio Hour Ep 440 Transcript (Right click to download)

5 Comments

  1. Mark Brandt says:

    I have a ton of respect for Ralph Nader, but I feel the discussion about self-driving cars has some serious issues. The person you’re interviewing has routinely misrepresented what Tesla is doing and has not told the full story on numerous occasions. He also has a vested interest in seeing Tesla fail, as he is attempting to market a similar product.

    The fact of the matter is that Tesla vehicles can drive themselves while on Autopilot. Does the software need more work? Yes, it does; however, Tesla has never marketed this as Level 5 full self-driving.

    In addition, there have been significant smear campaigns on the part of the auto industry, oil companies, the auto magazines, the media, and Consumer Reports (which made me lose all trust in that publication). I’d say we need to discuss the data: How many crashes have there been? Is the crash rate higher than it is for a human driver? How many kids have been run over, and was Autopilot actually on when the incident happened? The last point involves one of the most frequent smear tactics I’ve seen: when the data is reviewed, it turns out that Autopilot wasn’t even on, but of course they don’t tell you this or update their story.

    Lastly, one of the main reasons self-driving is being pushed is the shortage of truckers and the limits on the number of hours they can drive. A self-driving truck doesn’t need a rest break and will never need to stop except to charge or because of severe weather. Follow the money and you’ll know why this is being pushed.

    Thank you for all you and your team do Ralph. I learned a lot over the years from listening to your program.

    Mark Brandt

  2. Conrad Wilkinson says:

    Ralph, you have been my inspiration on all matters since I first listened to your talks in the late ’50s. While I understand your concerns about Musk and his politics, I have to disagree with your assessment of the Tesla.

    I’m 85 and have driven my Tesla X for over two years. It is the best car I have ever driven. On the freeway I engage the self-driving, but with hands on the wheel. There are a few quirks, but for the most part I find it quite safe and enjoyable. It seems the technology is here to stay… I hope you give it another try.

  3. Michael says:

    Nader, please don’t fall for this charlatan.

    There are plenty of articles and analyses pointing to O’Dowd’s test being fraudulent, with the FSD software not even being activated. People have recreated the tests, and the Teslas stop or drive around every time.

    If Teslas succeed at stopping for a poster of a child 99.9%+ of the time when users run the test, what are the chances that Dan O’Dowd’s test could have possibly had the Tesla fail at it THREE TIMES IN A ROW?

    Video from inside the cabin during Dan O’Dowd’s tests seems to indicate that FSD was not even activated.

    Please look at the statistics for fatalities per mile driven and honestly compare them to non-Tesla cars such as the Chevy Silverado, which is involved in 1,400 deaths every year.

  4. John Puma says:

    Would someone explain the actual need for autonomous cars?

    What name can be put on the essential subsidy of reducing delivery standards (AND pension requirements!) at the USPS (or any other standards at any other government institution) … before handover to corporate owners? May we assume such handovers always include lenient “give back” (to the government) provisions if the corporate snowflakes do not realize the profits they think they deserve?

    To Ralph (@ 1:03:52ff), modern fascism had better soon be recognized, precisely, as government of, by and for the corporation. Note the bipartisanship of the classic, essential fascist elements that do remain: 1) a sense of societal rebirth (“MAGA”) and 2) super-nationalism (“exceptionalism”).

  5. Raymond Greeott says:

    Hello, Mr. Dan O’Dowd (and Mr. Ralph Nader)

    Like you, I have been having nightmares about the idea of too-simple, incompletely designed autonomous cars. My background leads me to suggest better engineering options – ones which can perhaps transformationally advance transportation safety rather than stagnate, reverse or seriously compromise our vital progress. First, we have to share a little engineering context. It is essential to mutually ground ourselves in some of the hard-won wisdom of world-proven fail-safe systems design if we are to proceed, being part of the solution rather than magnifying the problem. I think I can assure you this modest bit of reading will be extremely worthwhile:

    I was the Lead Engineer, all circuit design, for the twelve famously self-healing, immaculately safe Boeing 757/767 and 747-400 Control Systems Electronics Units. Congratulations – you are from Caltech, I am from Berkeley. When we come from excellent beginnings, that is only the doorway into the decades-evolving question of what we will authentically contribute with that foundation toward the goal of optimally safe vehicle transportation. The critical flight control systems I reference were the advanced yaw dampers and gust-response long-body stability augmentation systems, fly-by-wire spoilers, and electric stabilizer/Mach trim. (You worked on the B-1B; last I knew, my unprecedented 767-originated radical single-stage quadrature-rejecting LVDT critical control surface position demodulators were being eagerly adopted, throughout, for the Boeing B-1B.)

    The spoilers were masterful, industry-leading, 3-unit split-authority, super-elegant analog systems (3 units so that no failure could accidentally deploy more than 2 spoiler panels), each one 3-part redundant: self-healing so that no errant deployment would ever happen. The SST-spinoff, full-time yaw dampers were two redundant units, each with dual dissimilar processors, forming a dual-dual overall system. Software for the two dissimilar processors was written by two independent engineers, using different programming languages, who never talked to each other.

    Both processors in each unit had to agree every 16.7 milliseconds on every math computation and action, or the unit would retest and flag itself as defective, alert the pilot with a first benign failure warning, and transfer control over to the equally designed backup dual-processor unit for the rest of the flawlessly safe flight. Software was carefully modularized in small sections to enable rigorous design and fault isolation. We did all of that fail-safe, multi-redundant design with a small, intense group under great pressure in 3 years. That was only possible because we had decades of rigorous safety-critical flight systems experience, always selecting and promoting only the best people from within and worldwide, as world-best engineering architecture strategy foundations. The proof is that none of those systems ever made a control path error – not in 4 globe-circling air transport models and forty years.
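    To make the cross-check idea above concrete, here is a minimal sketch (my own illustration, not Boeing code or any real product) of a dual-channel unit that compares two dissimilar computations each frame and hands over to a backup on disagreement; the class name, tolerance and alert are assumptions for illustration only:

    ```python
    # Illustrative sketch only: a dual-channel cross-check of the kind described
    # above. Two dissimilar computations are compared every ~16.7 ms frame; on a
    # miscompare the unit flags itself defective and hands control to a backup.

    TOLERANCE = 1e-6   # allowed numerical disagreement between the two lanes

    class DualChannelUnit:
        def __init__(self, lane_a, lane_b, backup=None):
            self.lane_a = lane_a    # channel written by one team/toolchain
            self.lane_b = lane_b    # dissimilar channel, independently written
            self.backup = backup    # equally designed standby dual-channel unit
            self.healthy = True

        def command(self, sensor_inputs):
            """Return an actuator command, or defer to the backup after a miscompare."""
            if not self.healthy and self.backup is not None:
                return self.backup.command(sensor_inputs)
            a = self.lane_a(sensor_inputs)
            b = self.lane_b(sensor_inputs)
            if abs(a - b) > TOLERANCE:
                self.healthy = False                 # flag this unit as defective
                print("benign failure warning")      # stand-in for a crew alert
                if self.backup is not None:
                    return self.backup.command(sensor_inputs)
                return 0.0                           # no backup left: fail passive
            return a

    # Example: two "dissimilar" implementations of the same control law.
    primary = DualChannelUnit(lambda x: 0.5 * x, lambda x: x / 2.0,
                              backup=DualChannelUnit(lambda x: 0.5 * x, lambda x: x / 2.0))
    print(primary.command(0.3))   # both lanes agree -> 0.15
    ```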

    Your latest automotive design people claim to you that in their world nothing like what we did can be done. Too little time in their self-allowed schedule. Doing it right might take another couple of years and you could lose a lot of new-model car bragging rights by then. Oh well …

    Their priorities – as they expressed them to you – seem to amount to just obsequiously cranking out a simplex, to me inherently unsafe, pseudo-autopilot design to promote and hype the next year’s sales model. These efforts will unavoidably result in disasters across the land in the long run, all too much like what is currently going on with Tesla. From what you say, these people seem to be trying to suddenly invent cheapo whiz-tech road vehicle “autopilots” – to me a sandbox of dangerous sorcerer’s apprentices – without any of the hard-won disciplines of decades of brilliant, evolutionary fail-safe systems engineering development experience proven obligatory in zero-visibility commercial aircraft automatic landing and other critical systems. Yes, we are just talking about little cars here, not aircraft, but, as I will show you, any sudden autopilot maneuvering errors (sudden steering hard-overs and all) of closely traveling cars are much less forgiving, considering any possibility of driver override and safe correction (ZERO – none is possible), than our always-attentive pilot correction times during automatic descents, landings and all. So I would say that all of this is an even bigger deal for 70 mph road vehicles than for airline transports – particularly when you tempt millions of cleverly over-sold, naive drivers to believe they can now just read text messages . . . not drive.

    Without rigorous triplex backup architecture, any simplex road vehicle autopilot is just flying blind as to what any system error will do, willy-nilly catapulting you into the next fateful moment that you insanely allow it to drive in unsupervised control. It will drive you hard-over into oncoming traffic or into a canyon the instant it develops a corroded, intermittent wiring connection and loses communication with its steering or road-position feedback sensor, as one of innumerable examples. The complacent driver will be helpless to instantly correct the unexpected hard-over or divergent command. Every moment you operate one of these ill-conceived, non-triplex-redundant fake “autopilots” is the equivalent of idiotically landing a Boeing 747 in a blind fog on a single-channel autopilot and a prayer, while the pilot is writing emails.

    Performance of the systems I gave much of my life to eventually lead has proven fail-safe for 40 years, and their established MTBFs set records at 300-500% of what Boeing analysts had previously imagined possible. It was all so advanced that, after a year of comparative design studies by a Boeing staff team, the new 747-400 systems ten years later adopted our 757/767 designs, because the industry, even with decade-newer microcircuits, was still not capable of matching our systems’ design economy or proven performance, or of coming close to their reliability – some kind of a record in modern electronic design. I do not believe our digital systems could be hacked, as they were not reprogrammable wirelessly – only by proprietary PROM modifications in the factory shop.

    You now seem to be finding there are zero software or performance standards for autopilot cars, and that Tesla’s design standards are nothing but what we at the original Boeing would consider criminal negligence. I can’t believe no one proved this years ago. Only in 21st-century America… Whatever happened to real engineering? More power to you, crusading for progress here.

    I am a little curious how you have achieved fail-safe software designs, and “un-hackable” ones at that, in a time when you say they can be addressed wirelessly by just about anyone (which strikes me, as an architect/designer from a more innocent previous era, as a fundamental, show-stopping mistake). Opening up that wireless accessibility is like letting mice, carpenter ants, angry chimpanzees and Putin mess with the laws of God. I’m not flying there. I’m not driving there – except in my tank-like Volvo.

    I am deeply skeptical that any one man or his company has created more than a niche contribution to what are being promoted as today’s masterful, “un-hackable” digital software control and avionics systems – in an era where decades-evolved company teams of generally brilliant, integral engineers with life devotions to aircraft integration, flight testing and millions of flights have found that iffy, famously elusive. Nonetheless, if that is your track record, more power to you. We should cautiously expect and deserve great things from you, in steering the auto/truck industry off the delusional courses they seem to be on.

    I may be wrong, but I suspect you may be one of innumerable software specialists, perhaps one of those job-shopping from program to program, building your resume as if you invented, almost single-handedly, all those control and avionics computers you supported with bit-part contributions. Nobody does that. These are masterful multi-redundant systems, not hyped little Teslas: they are massive, brilliant team efforts of indescribably layered, interdependent disciplines: critical multiply-redundant dissimilar systems architectures, which you never even mention in promoting the idea of present or future autopilots but which are ESSENTIAL; high-level calculus of control-dynamics equation synthesis; integral, ultra-sensitive, lightning-strike and high-voltage-transient-immune noise-discriminating circuits; and infallible software architecture and design, as a beginning. If you have devised strategies for actually making digital systems hack-proof, well, great! That would be a major accomplishment, so I would encourage you to teach that strategy to everyone in denial about these realities, all the innocent babes now malpracticing this black art. We’ll write your name on the wall.

    A rigorously hack-proof processor system would be a real step up in safety, actually chasing out all those sly, recurring digital rats and snakes – potentially equaling the absolutely hack-proof natural physics of elegantly designed analog systems, which we at Boeing preferred in some of our critical responsibilities, and which therefore should perhaps be considered safer than digital options in many medium-complexity, failure-intolerant systems.

    I am not at all dazzled by the formula-learned disciplines of hopefully diligent software coding, or by the traditionally very fallible microprocessor systems now being hyped and served up to unaware, naive people as the coming Techno-Messiah. No one should be uncritical when our home computers and almost every program stall, crash often, and must be rebooted and software-fixed over and over again through repetitive updates, and the Teslas you report analyzing seem to make a serious error every 8 minutes.

    The inconvenience of periodically rebooting PC programs is one thing; but when your single-thread autopilot throws you suddenly into a death spiral, that can be quite another. I understand the utility of digital systems, where they can be carefully managed by active operators in time-forgiving roles – such as in automatic landing systems where the pilot has to be ready to take over and go-around in a moment if full triplex system backup integrity is suddenly lost. We famously pioneered and perfected those systems in our most complex applications; but I am having trouble ever seeing them in instantly critical, absolutely unforgiving fatality-prone autopilot highway use – cars or trucks. I admire the idea of your possibly improving the art, toward perhaps someday bringing a single processor up to more like the near-infallibility of a rigorously constant, elegant, naturally hack-proof analog system. But let’s take a look at what the real operational safety issues actually are for ANY safe autopilot-controlled highway vehicle; and let us note that these issues are particularly critical for vehicle autopilots in oncoming traffic and all where the error-correction time of useful consciousness is ZERO. (Airplanes are quite a bit more time-forgiving. Consider this: “Air-force” fighters will never fly close wing-on-wing formations on autopilot: only with a really sharp human pilot. No time for autopilot hard-overs if you lose feedback, or if you lose guidance. Same with cars in traffic. Single aircraft spaced in the sky are much more forgiving.):

    The big picture in any hypothetically-transformational, conscionable autonomous vehicle design must be in getting the critical foundational systems architecture immaculately fail-safe: When building a highway-vehicle autopilot system, you either get this rigorously RIGHT, to the last bit, and somehow sustain it throughout every day of each vehicle’s road life, or you are guilty of vending high-tech death traps to over-sold consumers for your own cynical profit – which is exactly what may conceivably be happening in an unconscionable capitalist hoax soon being spoon-fed to millions of naive people by present players and people you may be contracting with. I don’t know how complex and self-healing those systems are and will be, but you have given me no indications they can be more than simplistic single-thread systems – and that is suicidal if you ever allow them to be operated in traffic as autopilots.

    Only the right, fundamental systems architecture strategy can deliver a system capable of infallibly adapting to what is new, and of being self-healing, self-correcting and life-saving in any sudden challenge, at any speed and condition – as I have had to be myself, more times than I can count in my driving life. This utterly cannot be done without mastering fully active, infallibly designed triplex or dissimilar dual-dual quadraplex systems – of which the processors are only one of the critical, mutually dependent sections. Other critical links – which equally determine moment-to-moment position, velocity, acceleration and deceleration, and therefore computational sanctity within the ever-changing environment – are ALL of the numerous forward-looking, side-looking, GPS-link and traffic-light Wi-Fi links (probably an unconscionable, unnecessary, hackable nightmare of what-ifs, in my book), and the roadway guidance sensors, steering position feedback sensors, and velocity and acceleration sensors, all intimately integral to the dynamics of safe autonomous guide-path computation and control. And how will the pattern-recognition capacities of your environmental navigation sensors ever come close to matching the perception and strange-situation adaptability of a self-preserving human driver wanting to get the kids home safely?

    All of these critical links and infrastructure will have to be a minimum of triple-redundant in order to have a viable operationally fail-safe system at any road speed – particularly the horror of highway speeds. An experienced engineer might rightly expect that to come true on the next cold day in Hell, eh? Anything less integral will make the 1965 Corvair look like a masterfully uncomplicated design, admirably with only that one gas-tank-mounting design oversight, not dozens of unnecessary, Rube-Goldberg, possibly fatal complexities, soon sold by sorcerer’s apprentices for anyone to enjoy an afternoon of safe driving. Give me one of those 1905 electric vehicles Ralph Nader talks about, just with improved batteries, safety harnesses and really heavy-duty bumpers, please, if I can have that choice.

    NOTHING LESS than full triplex redundancy at every one of these levels can possibly create a safe outcome when a sensor failure or intermittent wiring contact occurs. (Every 757/767 fly-by-wire spoiler LVDT panel position transducer was triplex – thank God – and triple 400 Hz power-sourced, to feed triplex independent analog computers. How are you going to do any of that in cars? What are the super-reliability, redundantly fail-safe, cheapo steering position feedback sensors you are thinking will not be intermittent for a lifetime of driving, or for a month, a year? They must be continuously monitored (cross-compared) every few milliseconds to assure fail-safe autopilot control loop operation and prevent hard-overs. Plastic film potentiometers will wear in relentless service and become erratic with time; aircraft would never use such low-grade commercial sensors. Single-thread feedback sensors are suicide. This whole idea of safe car-autopilot operation just seems to be a Big Lie.) It is absurd and irresponsible to promote the notion that viable automotive autopilot systems might soon be possible if we can just cherry-pick one factory sample good enough that we can video it actually stopping for the next dummy in a crosswalk – as you have proposed to Ralph Nader. Let’s get real!

    One reviewer of this interview thought factory Teslas had successfully stopped for a photo of a child 99.9% of the time. Job done??? But I am not a glossy plane-figure photograph. You report Teslas essentially can’t stop for dummies – I am a lot more of a dummy than the 2D photo they programmed their software to stop for. There seems to be an alarming difference in the ability to perceive such subtleties – or any kind of subtleties. And let’s get real: is 99.9% pattern recognition good enough to protect your mother and your kids in traffic crosswalks and all? If there is a white baby carriage, does that flummox the computer, programmed for that one photo image? Does it flip a coin? Kid on a scooter? If I drive quite a bit, like an Uber driver, and encounter a pedestrian in a crosswalk, say, 3 times a day, I might encounter about 1,000 of them within a year. With a 1-in-1,000 recognition-failure rate, that means I kill about 1 crosswalk pedestrian every year. If my system is somehow miraculously 10 times more error-free than that, then I only kill one crosswalk pedestrian every 10 years (the arithmetic is sketched just below). Do I and the design engineer deserve a gold star for that?
    I don’t know any human driver who has ever killed a crosswalk pedestrian in his whole lifetime of driving: there’s a comparison for you. Humans are excellent pattern recognizers and adapters to novel events. I get the feeling that a lot of pedestrians might no longer be able to survive 10 years – going for groceries, the park, small towns, the library – with system “design” standards anything like that. This could be you someday. And if these gadgets are THAT fussy in perfectly staged, static, showcase crosswalk events, what will their recognition percentage be with the random kid or dog or fawn leaping out from between cars onto the roadway – 10%? Do any of them deserve to survive, while the driver is texting or taking a snooze? And what about when your sensor “eyes” get dirty in a muddy rainstorm, or on that little drive you took yesterday down a long dusty driveway? At least a driver has windshield wipers and a brain to know if his vision is occluded! A muddy or dusty sensor will just go dumb, especially in poor light conditions, I guess, eh? Is all this done by some kind of infallible lidar? Doubtful. (We never have any of these problems with 747s following triplex-independent ILS radio beams onto a perfect landing, in any kind of fog or weather – no matter how long our continuously maintained and double-checked systems are in service. Try that in the family auto. I am so glad I don’t have to worry my life away trying to perfect highway autopilot systems to sell cars.)
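    Spelling out the arithmetic referenced above (the numbers – 3 crosswalk encounters a day, a 1-in-1,000 or 1-in-10,000 recognition-failure rate – are the assumptions from that paragraph, taken at face value):

    ```latex
    % Assumed figures from the paragraph above, not measured data.
    \[
      3~\tfrac{\text{encounters}}{\text{day}} \times 365~\tfrac{\text{days}}{\text{year}}
        \approx 1{,}000\text{–}1{,}100~\tfrac{\text{encounters}}{\text{year}}
    \]
    \[
      1{,}000 \times \tfrac{1}{1{,}000} \approx 1~\tfrac{\text{failure}}{\text{year}},
      \qquad
      1{,}000 \times \tfrac{1}{10{,}000} \approx 0.1~\tfrac{\text{failure}}{\text{year}}
      \;\;(\text{about one per decade}).
    \]
    ```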

    That one-time, or whatever, dummy test some people are pushing as a safety standard is a deceptive, meaningless photo-op promotion that any competently designed single-thread processor with the right optical pattern detection algorithms can easily pass today – but passing it will offer NO sufficient safety in daily service and through the inevitable system degradations of years of real-world neglect and abuse. All of the failings of single-thread systems apply – yet you have glibly said to Ralph Nader that if someone produced a car that would initially pass that test once, for a day or two, for you that might qualify as a viable design. I worry for all of us when you, posing yourself in a more and more influential public position, say that kind of thing. I’m sorry, but no career aircraft control systems engineer would believe you are a qualified systems engineer, let alone an industry leader in fail-safe autonomous control systems design, after a rash, ungrounded, vapid statement like that.

    (And what if it is a baby carriage, a guide dog, a child chasing a ball across the roadway? Does our miracle pattern-recognition software, which still cannot recognize people dummies after 5 years of unconscionable road trials, get that and all the other possibilities right? Does Elon Musk, GM or anyone guarantee with something like their own lives and freedoms, that their “autopilots” will be safe, not kill you or anyone – for one day, one year, five, ten years, twenty (electric cars should at least last a long time) or as long as you or the next sucker drives it? If not, why are our completely out-of-touch so-called national and state transportation regulatory agencies allowing ANY of these infinitely unproven, continuously error-committing monstrosities on my roadways?
    Are you the masterful, big-picture, career aerospace control systems engineer who is going to stand up and clarify these basic architectural absurdities for the car industry and the political/regulatory establishment, or just a software code fixer? I like to be very optimistic. We have serious work to do here, from the highest places of experience, ingenuity and integrity beyond self. What can you genuinely bring to the table? You must now take this rapidly festering national safety crisis as seriously as a pandemic. You say you are taking on Tesla; what about the systems architectural fallacies of the rest of the similarly motivated industry, including GM and others you may be courting and contracting for? Who are you going to become, in the eventually revealed history of all of this? Dennis Muilenburg became a technician, not an integral engineer: an easy phony-engineering-authority tool for cynical financial stock management. He may be rich, but he proved himself a national joke in congressional hearings: his legacy is death.)

    You speak optimistically of traffic lights that should be designed to cleverly communicate upcoming red/green light-change conditions to every car. Isn’t that another open wireless invitation for destroyers to hack the service with disinformation – potentially all across the country at once? Am I ignorant, or is that notion just another crazy capitalist money scheme clothing itself in the techno-spin idea of ingenious infallibility? I keep liking how my own eyes have worked just about infallibly all of my life. And all I have needed is a couple of pairs of glasses.

    The Tesla flaws you describe seem to depict unconscionable single-computer, evidently single-thread “autopilot” designs. What they are selling for “$12,000” cannot be more than a single-thread, minimally designed, in my view criminally conceived JOKE of a real fail-safe autopilot: guaranteed to kill untold numbers of people. (Others have more correctly estimated over $100,000 – or multiples of that – to authentically produce the triplex or quad-complexity units (let alone the triplex infrastructures required to support them) unavoidably essential for autonomous safety. What do you do when you are driving willy-nilly along at road speed and, for whatever reason, the triplex fail-safe infrastructure suddenly blinks out, or just dies? Do they provide a box of good prayers with these infernal gadgets?)

    Triplex fail-safe systems presently seem to be viable projects appropriate only for very large airplanes, NOT daily cars, NOT diesel trucks. Any single-thread, single computer, single line of sensors, single feedback network, single actuator will be unsafe in active feedback control at any speed. Repeat this across the country and the world in millions of vehicles and you could be complicit in making the criminal negligence of Boeing’s infamous Dennis Muilenburg look like minor negligence. We would become worse than the laughingstock of the engineering world. Whatever happened to integrity?

    We at the original Boeing engineering company, back when you were still in class in high school or at Caltech, prevented unacceptable, dangerous single-thread or dual-system failures by using the mighty 747 as the ultimate test bed to pioneer and perfect unprecedented Category IIIB zero-visibility automatic landing and full-regime flight control systems, using only elegant analog designs where they were best, or, when necessary, dual processors programmed in different languages: dual-dual cross-check and instantly self-healing automatic redundancy-fallback approaches through and through.

    I don’t want to fly across the Atlantic or drive autonomously on highways on safety-critical, single-thread designs of any kind. Duh! Those failure numbers are suicide! Yet I have never heard anyone who blithely promotes automated cars as realistic in our near future – including you, in your brief conversation with Mr. Nader – even mention the concept of fail-safe redundancy, or essential redundancy management. That indispensable architecture is the only foundation of critical systems safety. What gives?

    To just meet the first redundancy standard of failing passively – that is, without a control-wheel hard-over from loss of position feedback or a computer insanity error – WARN and let the driver suddenly take over, you need two redundant, dissimilar-hardware/software computers. Otherwise they can both make the same miscalculation and agree it is rational. But of course that fail-passive strategy is totally non-viable in any automotive autopilot system, because it requires INSTANT driver takeover, perfect thinking and smooth reaction – a quiet state which a complacent driver will not muster again until he is dead. To fail, but instantly still maintain safe, ongoing, active automatic control for the rest of the highway mission, you need an extreme-reliability triplex or dual-dual digital system. These principles were pioneered and carried forth with immaculate success in 747 Category IIIA and IIIB autoland systems, and the full-time critical 757/767 systems I have described.
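    For readers unfamiliar with the distinction, here is a minimal sketch of the general 2-out-of-3 idea (my own illustration, with made-up channel names, values and tolerance, not any certified design): mid-value-select voting lets a triplex system keep operating after one channel fails.

    ```python
    # Illustrative 2-out-of-3 (triplex) voter: mid-value selection rejects a single
    # failed channel and the system keeps operating on the remaining pair.
    # Channel names, values and the tolerance are assumptions for illustration.

    def triplex_vote(a, b, c, tolerance=1e-3):
        """Return (selected_value, failed_channel or None) by mid-value selection."""
        channels = sorted([("A", a), ("B", b), ("C", c)], key=lambda kv: kv[1])
        low, mid, high = channels
        failed = None
        if abs(low[1] - mid[1]) > tolerance and abs(low[1] - high[1]) > tolerance:
            failed = low[0]      # low channel disagrees with both others
        elif abs(high[1] - mid[1]) > tolerance and abs(high[1] - low[1]) > tolerance:
            failed = high[0]     # high channel disagrees with both others
        return mid[1], failed

    # Example: channel C suffers a hard-over; the voter keeps tracking A and B.
    print(triplex_vote(0.10, 0.11, 5.0, tolerance=0.05))   # -> (0.11, 'C')
    ```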

    If you somehow lose, say, steering servo position- or lane position-feedback signal integrity, or your processor falsely interprets GPS position, the computer will immediately drive the wheels hard-over, because it thinks it needs to try harder to move the wheels, which do not appear to be moving the vehicle, as commanded, to the right place. This will immediately crash you into oncoming traffic, into a basalt cliff, or through the guardrail and down into the canyon on a mountain road. Which do you prefer? There will be no warning, only an instant hard-over maneuver. Elon Musk will say: “Well, it only happens once in a while, maybe once or twice in your lifetime in a Tesla if you get a good one, and you might have fallen asleep and done that to yourself anyway. We think we can eventually be a little bit safer than you with our $12,000 system, especially if next time we charge you a little more and you trade in for the next genius model. Three Teslas down the road you might be even better. Just keep upgrading. Looks like you haven’t crashed yet. That’s a pretty good test, isn’t it?”
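    The hard-over mechanism described above can be illustrated with a toy model (entirely my own construction, with made-up gains and a cartoon vehicle, not anyone’s real controller): a simple PI steering loop whose position feedback freezes. With no cross-check, the integral term winds up and the command saturates at its limit.

    ```python
    # Toy model of the failure mode described above: a PI steering loop whose
    # position feedback freezes (e.g. a broken sensor wire). With no cross-check,
    # the integral term winds up and the steering command saturates "hard-over".
    # All numbers and the plant model are invented purely for illustration.

    def run(feedback_fails_at=None, steps=200, dt=0.1, kp=1.0, ki=0.5, limit=1.0):
        target = 0.2        # desired lane offset (arbitrary units)
        actual = 0.0        # true vehicle state
        measured = 0.0      # what the controller sees
        integral = 0.0
        command = 0.0
        for t in range(steps):
            if feedback_fails_at is None or t < feedback_fails_at:
                measured = actual               # healthy sensor tracks reality
            # else: `measured` stays frozen at its last good value (stuck sensor)
            error = target - measured
            integral += error * dt              # winds up if the error never clears
            command = max(-limit, min(limit, kp * error + ki * integral))
            actual += command * dt              # toy vehicle response
        return round(command, 3), round(actual, 3)

    print("healthy:     ", run())                       # command ~0.0, state near 0.2
    print("stuck sensor:", run(feedback_fails_at=2))    # command pinned at +1.0 (hard-over)
    ```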

    Again – I get the feeling that anything like this basic level of triplex systems engineering design wisdom is deemed too “difficult,” too expensive for cars – and that no one would even consider it. It could cost more than the vehicle, unless perhaps miraculously micro-miniaturized. After all, it is easier to falsely convince the ignorant public that high-celebrity single-thread systems are techno-magic, rake in the money, hire lawyers – and run. To be safe, rather than unsafe at any speed, you need triple- or quadruple-redundant sensors, including redundant GPS and any other guidance you rely upon to try to drive somewhere without being the sentient one paying attention.

    When I lose a headlight on my Volvo, I can still see with the other, independently circuited headlight. It can be dangerous to drive at night with only one headlight. But in my old Volvo, an intermittent connection somewhere in the sensor wiring, or a burned-out light bulb, does not suddenly drive me hard-over into oncoming traffic – or cause me to fail to see someone in a crosswalk, or to ignore or confuse anything a driver needs in order to sense fast-moving reality every second. Don’t you see! There seems to be no justification with current technology for precipitating such a level of complex digital control vulnerability unnecessarily into our daily driving, before it can possibly become technically and economically perfected – which seems to be nowhere in sight!

    The wise, safety-promoting engineering strategy should be to keep the sneaky complications few, and the process as simple and common-sense as possible for the driver. If automated systems cannot be guaranteed to remain fully fail-safe for as long as the car is driven, they must not be allowed on the road. And what about neglected or inadequate long-term maintenance? Ten years and more from now, by what abstract act of God will any of these heavily weathered, ultra-complex, difficult-to-maintain potential monstrosities ever be certifiably safe? (How many of their obsolete microcircuits will still be available, and at a reasonable price? Is your local mechanic’s new hire, or the old guy, going to do the chip-replacement soldering? How much will a whole new board cost you? You wanna buy a used Tesla and drive it on autopilot 5 or 15 years from now? Entropy!!! Are our roads and neighborhoods about to progressively become techno-nightmares? Pray tell: what practical, life-assuring, comprehensive, daily, fool-proof self-tests will the new “geniuses” provide drivers to protect themselves, their families, and hapless street-using citizens with? What nightmares are we creating in our infantile techno-enthusiasm for making a lot of money?)

    Commercial aircraft must be meticulously maintained each day, periodically completely overhauled and shop-certified, and federally re-certified to remain in continuous round-the-world operation. Do you expect automated cars – out in the weather, in neighborhoods where they can be tampered with, and poorly maintained by thoughtless average people who know nothing about them – to somehow be innocent miracles of safety coming at you and me 5-10-20 years down the road, if they somehow survive without a fatal crash by then? What is your miraculous maintenance re-certification and systems/driver-performance verification test plan for continuously safe autopilot certification and INSURANCE? Do you somehow have arcane digital patent remedies for all of these quite unnecessary, nightmare complications to our highway transportation systems? Will people someday long for the days of basic, almost infallible real cars – the Model A? Are we dealing with selling glib pipe dreams for a little glory, and change?

    Promoting automated driving systems that are less than adequately redundant and failure-immune is unnecessary, and disastrous karma. We don’t need those potential monstrosities any more than we needed the criminal, bean-counter-directed 737 Max Disaster “design” shoved at us by the Jack Welch wannabes who took over Boeing, turned the great Bill Allen/Joe Sutter engineering company into a sick private cash cow, and destroyed Boeing and many lives. It’s your choice now, just as it was with the self-proud Boeing mis-managers: big corporate money and personal cash, or millions of safe lives. Temptations are dangerous, especially when they are personal. Please be careful which side you are responsible for promoting. If there are ways of creating the triplex, fail-safe, self-healing autopilot systems we must have before we can attempt autonomous cars: SHOW ME. There are compromises, lines a real engineer will never cross. (The common tycoon-wannabe only temporarily conceals a rather different agenda; his emptiness.)

    The only thing simplex, single-thread systems can responsibly do is back up a comparatively marvelous human driver, who has vastly superior awareness, experience, pattern recognition, and adaptive capacities. He can eventually be deprived of a driving license when he fades from competence. It’s not perfect, and it can be improved, but a human is fundamentally superior to anything like the superficially designed machines which seem to be failing your most basic driving tests over and again – and which will develop geometrically worse track records as they proliferate across space-time like cancer.

    Boeing always puts pilots in prime responsibility – with electronic systems strictly in secondary backup unless they can be masterfully designed to be so rigorously redundant that they will always be infinitely safe – as our full-time, mandatory 757/767 and 747-400 systems have proven to be in untold millions of flights and world navigations for forty years. The systems I have described have not killed one single person – though over one million people fly on them every day. If this is not done with cars, you will continue to kill tens of thousands of people – and just write it off as inevitable, unavoidable, as Musk is trying to do. (That is sociopathic.) I know differently.

    Again – in your brief conversation with Ralph Nader I am not hearing you talk at all about rigorously redundant systems, while you seem to glibly promote the future of self-driving cars as somehow a rational, necessary development whose time is coming right around the corner, perhaps with your next Dawn design. What am I missing?

    Who – pray tell – needs to free up drivers, as you glowingly suggest, so they can idly text and read the Wall Street Journal instead of managing the critical driving, while they are flashing by me in the opposite direction at 120 mph collision speeds? Who needs the insanity of a possibly single-thread computer which can suddenly latch up into a hard-over, full-throttle or turning maneuver on “autopilot” some day? Passive drivers will always be a day late and a lifetime short in overriding such a random runaway. Oh – we are supposed to be able to indulge in the valuable pastime of texting and reading our stock reports. That should speed up our reaction time quite a bit. There is utterly no way for a passive driver to instantly take stable, creative command of a suddenly suicidal vehicle autopilot system which is not designed infallibly with the full self-healing, fail-safe, rigorous triplex fail-operational integrity that we perfected for our extremely expensive Category III autoland systems and others – which you should at least be superficially aware of, or you are not what you claim to be. You got that solution patented in your pocket too?

    None of this in my world! Would you want to replace the veteran airline pilot, with his wise, wide windshield view and lifetime of experience, on your next trip, with a $12,000 box of “internet-purchased” software toys and wires, contrived by Tesla or the next program-schedule-obedient junior sorcerer’s-apprentice “engineers” cranking genius out the door, untested except with dummy photos – unproven as an authentic safety improvement in any way? Not even today – let alone in those interesting years down the road. And in blissful denial about all of it.

    Any simplex, single-thread systems need to be relegated strictly to the simpler backup chores like automatic emergency braking and, possibly, improper-lane-change driver ALERTS. Any sane driver needs to rely first on his own comparatively astronomical wits for every human decision made along the way, be present, on top of whatever is going to happen unpredictably three seconds from now, and only rarely rely on someday-fallible computers, mazes of electrical wiring, corroding fuses and eventually-degrading sensors for desperate backup support, in the rare case the human driver does something unintelligent. (Oh my god – my electric Volvo windows quit working again on the way home the other day. At least I didn’t die from that.)

    Backup emergency braking and lane-supervision systems, etc., can relatively easily be engineered to be actively self-tested, as a little routine checkup by the driver, for essentially full functionality, say, each day. Once I have very simply test-validated significant functions like emergency braking and lane-divergence warning each day or trip, then with an integrally designed, high-reliability system I will have a negligible statistical chance of my safety backup not being there if I become negligent and need it. That could perhaps eliminate 90-99% of the consequences of any rare lapse I may eventually have. That single, truly authentic, bottom-line daily self-check of my simple backup safety system then mostly supersedes and compensates for a world of unavoidable eventual lifetime failures in more complex systems that only pretend to be forever autonomously fail-safe.
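    One rough way to see that claim (a sketch only, assuming the backup function fails at some constant rate λ and is functionally checked at the start of each day): the exposure to a latent failure is bounded by the time since the last successful check, rather than accumulating over the life of the car.

    ```latex
    % Sketch under an assumed constant failure rate \lambda and a daily functional check.
    \[
      P(\text{backup dead when needed}) \;\le\; 1 - e^{-\lambda\, t_{\text{since check}}}
        \;\approx\; \lambda\, t_{\text{since check}},
      \qquad t_{\text{since check}} \le 1~\text{day},
    \]
    \[
      \text{versus } \lambda\, t_{\text{unchecked}}, \text{ where } t_{\text{unchecked}}
      \text{ can grow to months or years in a system that is never verified.}
    \]
    ```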

    There is therefore at least some potential to catch a sleep-nodding, disabled or intoxicated driver with a variety of warning wake-up defenses from a simple, single, simplex system – but ONLY one routinely tested once a day. (A thousand times better to have a simple intoxication/driver-alertness monitor, eh?) A recent, authentic self-test, combined with a high-reliability, hack-free design and an alert human being for a driver, can, in effect, statistically more than replace all but the most infallibly designed, multiply redundant driverless autopilot system fantasies.

    After an authentic daily functional test, I then obtain natural integrity throughout the rest of that day or trip. For example, I would look for it to brake smoothly when I slowly approached my garage door or another car in front of me at an intersection. If it ever did not, I would damn sure stop it myself. I am the manager of my own life! And then get it FIXED. If I am one of the luckiest car owners, that failure might only occur once during the lifetime of my car, but the daily functional test would be delightfully life-saving – perhaps for many people.

    Lane-changing functions can be designed to authentically inform the driver that they are working via simple, active driving tests – showing whether they actually work in relevant left-right lateral-move conditions. It requires a little more thinking but minimal complexity, while its simple, elegant design creates the maximum possible safety at minimum cost. I would buy a car like that every time – perhaps especially if these were elegant, utterly non-hackable, God’s-physics-programmed and predictable, integrally designed, almost never-fail analog systems.

    Better yet – I really just need to be responsibly sentient, harnessed in, safe against most oncoming collisions in my old Volvo, without any of that false electronic security. It is beginning to look like robust, tank-like defense against autonomous cars should be my continuing priority. All considered, I think Volvo created better safety solutions, vastly superior to what you seem to be proposing, decades ago. (The more things change, the more they seem to go backwards, eh?) All it requires is an awake driver concerned with his own life. Simple.

    Why possibly create a Rube Goldberg digital rat’s nest of techno-vanity, when a wise engineer can frequently deliver ultimate safety more simply and directly with a common-sense, elegant, almost infallible purely analog design? Or just the pure almost-infallible mechanical design of my Volvo? You have evidently spent much of your life trying to heroically civilize notoriously glitch-prone, hack-prone, weird incident-prone digital systems. But there is no glory in unnecessarily committing these unseemly liabilities to smaller systems, you see. In smaller, elegant analog systems, engineers can put full attention on the fully-analyzable integrity of predictable physical component relationships and mathematics. (The perfect safety-critical 757/767 fly-by-wire spoilers are the Gold Standard of industry design examples: complex, yet super-elegant analog computer designs,* difficult or impossible to equal with an iffy digital rat’s nest. Where there are rats there are snakes. Integrally designed, naturally constant analog physics {rather than a someday-inconstant – whoops, we just crashed – hacked or self-confused digital system} has none of that unnecessary and unconscionable safety liability. Real engineering sorts this out, and never chooses the inferior approach.)

    Concerning the greater value of human safety: is your expressed aspiration of logically entertaining full autonomous driving systems in the near future a transformational human safety program, or a computer technician’s next fantasy toy box? A personal business finance manager’s money bin?

    Please – stop for a moment and think of the massive, multiply-redundant, perhaps perpetually hackable, critical, arguably unsustainable overall infrastructure, necessarily including multiply-redundant fail-safe Wi-Fi GPS, semi-reliable road guideway strips and all, which will be required to support authentically safe auto- and truck-autopilot operation decades from now, eventually for hundreds of millions of vehicles every day. Yes, this will keep some nearly minimum-wage truck drivers from burdening fleet transport profits. But at what cost? What would the best engineering professors at Caltech call a system with such unprecedented external and internal liabilities? “Designed by idiots,” rings out of the past from a memorable professor, Robert Steidel at Berkeley.

    I continuously marvel at how alert oncoming human drivers, with their lives constantly at stake, passing oppositely at 60 mph in close lanes, are usually miracles of reliability in my presence compared to anything like the absurd, “high-tech”-genius-posing, half-baked inhuman Tesla systems you cite and seem to be only marginally intending to improve. (Yes, Nikola Tesla is rolling over in his grave. How is a sham so easily promoted in such a great name? Did Tesla deserve this? Do we? Do our families and their children . . . ?)

    One day not long ago one of those drivers, veering head-on into my lane – but who could hear my horn – veered back into his lane in the last 2 seconds, and saved both our lives. How many of your micro-clever macro-stupid half-vast pseudo autonomous systems can hear my desperate horn, veer back to a sanity it has unconsciously, dumbly abandoned? Add that to the stack of impossibilities. They have no human life force, no history, no skin in the game. Neither do they value me. And, further, if I were at that time riding in an autopilot-driven car, there would have been no horn on my part, no innovative evasion toward the side of the highway – or off it – like I was doing to save my life. There would be nothing: Nobody writing this now to you. Dear Sir: if we are not very careful, all of our cleverness here is, once again, quite unnecessarily adding up to Nothing. This could happen to You. You could suddenly become Nothing.

    Bureaucratic and non-existent federal and state automotive-autopilot standards are still empty of these hard-won realities; clueless. We at Boeing used to have to be the FAA’s and British CAA’s educators and leading partners; we were universally championed, back then, as out-front, rigorously serious – and the safety-driven Leaders in our world-best systems development science. You only learn integral, balanced design through the rigorous process of having to invent it by necessity. Our world-superior original track record immaculately proves that. That was the famous Joe Sutter Boeing Company.

    The immaculately successful Boeing 777 control systems designs spun off from our elegant, pioneering 757/767 work of over a decade before. With your stated background with Boeing and other advanced integrated systems, you should, at least partially, be in a great place to question and utterly transform the conversation about any phony notions that GM, Tesla or anyone else has about simplistic autonomous vehicles, pregnant in the safety-stupid, unregulated capitalist financial system. Reducing company values to the narrow paper values of a greenback destroys the company, and us. Our personal character and our true values are all repeatedly tested through life before we get to Judgement Day.

    How will you ultimately compare with the Integrity Standard: Ralph Nader? I like to be very optimistic about hard-working, competent engineers maintaining their integrity through life, not ever selling out. That was a tradition I lived in and modeled with the best engineers in the industry. Will your proud creations be safe at any speed? Are you good with tech, but perhaps ultimately just comfortable to fall in line like the tragic Dennis Muilenburg, to cash in? I hope you agree with the higher standard that Joe Sutter, and the creators of those airplanes we once mastered, dedicated their amazing professional lives to uphold. Your influence on engineering strategy could potentially save or kill tens of thousands of people per year across the diaspora of American highways. Dennis Muilenburg has only killed about 360 good people so far. How will your ultimate tally compare? These are serious questions.

    You may potentially be in a position of important influence on the future of automotive guidance systems, especially the GM systems you seem to be perhaps already influencing. I am not yet aware of how the real complexity/cost/design-infallibility challenges can be expected to be authentically met on the technology-hype corporate marketing course we seem to be blindly on.

    It has become generally recognized by many engineers that, without costing the nation a fortune, sufficiently redundant, fail-safe common street vehicles and infrastructures are probably little more than a salesman’s pipe dream. Therefore, until a miracle happens – as engineers are wont to say – it seems the only honest approach is to shun hyperbolic fantasy approaches, exemplified by that arrogant little “engineer” Musk and his wannabes, and focus with integrity on keeping it simple, with minimally complex, physically robust, driver-reliant designs, or perhaps in some ways genuinely enhancing safety with modest, simple safety-backup systems somewhat as I have described.

    The generally wise Japanese have been engineering a similar pragmatic approach, I believe. If GM, and others you may have advised, lurch forward with highly touted fake “autopilot” systems that could actually be much less safe than more elegant, common-sense, minimum-infrastructure driver-backup systems, what would your best former Caltech professors think you finally amounted to, with all your opportunities – just a pile of money? I am not underestimating you; we are just getting serious here, are we not?

    Which of these competing idealisms now pulls you? Are you still an engineer?

    Again: I really get the feeling that rather than making deceptive, death-trap, Rube Goldberg techno-fantasies out of our cars with glittering microchips, those tens of thousands of dollars per vehicle and its infrastructure would be far better spent – indeed, just a few dollars of them – by simply adopting Volvo passenger-envelope mechanical design safety standards (and doing it without ill-designed, iffy airbags that can false-trigger, explode violently, and kill you).

    I think there would be far fewer accidents than with a fleet of inept, error-spewing “autopilots” – fewer fatal accidents, and fewer injuries and fatalities – when our, I think, more generally aware, sentient, adaptable, flesh-and-blood, self-preserving human drivers know they must take full responsibility every time they buckle up and take the wheel. Do you really have an autopilot that integral, that experienced, that almost naturally fail-safe?

    If you really want to seriously improve auto transportation safety, I might suggest that a real genius lobby for a much cheaper, probably more effective strategy of keeping it honest, driver-simple, and damn near fool-proof, with a little extra, legislated mechanical safety engineering. Call it old-fashioned if you want; I call it real – rather than the glittering Tesla safety mirage coming on. All kinds of similar things can also be done to improve basic highway, city street and traffic infrastructure to dramatically improve driver, passenger, bicyclist and pedestrian safety: study today’s Europe. But that is not the capitalist agenda here in the United States, and it is not going to be. That would be thrown out as socialism in this country. Not worth considering. Profit First. We lack a societal commitment to safety, health and welfare in this free-enterprise vending machine of a country. How do we clean up and transform above the psychological black hole we are in? I hope you somehow do your part. Ralph Nader does. He is integral, like Boeing’s 747 CEO/attorney Bill Allen, Boeing’s life-dedicated engineer Joe Sutter, and the real, life-values-committed professional engineers of this world.

    The future of wise or bad decisions, interestingly now passing through your hands, can too easily be paved with something like a million dead bodies. We are killing about 40,000 a year now. If we are to be part of the genius of progress, we need to be wise, not micro-clever like our little Musk and his perpetual wannabes. See you on Judgement Day: part of the jury.

    The engineering mission of the Boeing 747 and its original line of immaculate twin-engine successors (excluding the 737 Max Disaster) was to slash the hull-loss rate of the early, almost purely mechanical 707 by a factor of 30 to 1 (down to only 1 hull loss in every 15 million – 15,000,000 – flights*) – even while increasing its flight-critical electronic complexity by something like a million-fold. (Our design standard for any probability of any of our complex circuits or systems contributing to a flight-safety incident was less than one in one billion. That is how we perfected the almost impossible art of rigorously safe twin-engine 767 flights across the Atlantic, where we captured 70% of the North Atlantic traffic and never had one safety incident in decades.) Try that sometime in aircraft where wing-to-wing lightning strikes are common, and operational and environmental stresses are formidable. We did not stop short at just making flight a little safer than it was: we concentrated for decades on revolutionizing flight safety and efficiency to ultimate levels. We got rid of the human flight engineer and most pilot error with brilliantly engineered, revolutionary systems – previously impossible. Worldwide, airline crashes sometimes do not kill even one single planeload in a single year. American highways ironically kill 40,000. That’s almost equal to the total 47,424 direct American casualties in the entire Vietnam War.

    * That, by the way, means you could probably fly from here to Hong Kong once a week for 288,461 years without ever crashing (yes, that’s right) on your choice of the 747-400, the 777, and the pre-737 Max Disaster models. You like Hong Kong or Tokyo? Got the time? Got milk? Either one of those trips will be as safe as driving about 34 miles on any American roadway in my Volvo. (That’s what we did, unknown in our villages and neighborhoods as we did it.)
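    For the curious, the 288,461-year figure follows directly from the quoted rate of one hull loss per 15,000,000 flights, at one flight per week:

    ```latex
    % Arithmetic behind the footnote's figure, using the stated hull-loss rate.
    \[
      \frac{15{,}000{,}000~\text{flights}}{52~\text{flights per year}} \approx 288{,}461~\text{years}.
    \]
    ```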

    So, not taking advantage of realistic engineering opportunities to dramatically improve highway safety is like failing to prevent a major war every year or two. What are our political values? Anyone who says that if autopilot-driven cars can be made just a little bit better than human drivers, that will be a laudable commercial product, is just another baby Elon Musk clone: not engineering humanity toward the priceless, unique opportunity of transforming this unnecessary, war-equivalent bloodbath. According to your statistics, the current, seemingly simplistic Tesla-like systems appear to me to be on their way to unconscionably making the carnage far worse, not better. Even the infamous G.W. Bush didn’t create, in 20 years, the level of American carnage in the Middle East that I sometimes suspect half-witted phony autopilots are capable of, on what I am able to read of their present course. What legacy are you going to leave for yourself – and Caltech? If you have some opportunity, as we did, to powerfully transform transportation history – if you are still a professional engineer rather than morphing into another glib businessman/banker – DO it. Earn your possible legacy.

    The American highway carnage is so unconscionably horrific that progress of major proportions could be made here. But it can ONLY be achieved, without just pretending to do it and making things worse, in one of two ways: by perfecting long-term sustainable, elegant, triplex fail-safe auto-piloted vehicles and the integral infrastructure systems to go with them, or by keeping it simple and un-confusing and making autos inherently much more mechanically un-crushable, like Volvos. If you do not perfect rigorously sustainable triplex fail-safe autopilot systems, self-preserving human drivers will always be safer – and every moving car has a free one.
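
    To make concrete what "triplex fail-safe" means in the flight-control tradition I keep invoking, here is a deliberately toy sketch in Python, not any manufacturer's actual design: three independent channels compute the same command, a voter takes the median so that any single faulty channel is outvoted, and loss of agreement is flagged so the system can revert to a safe fallback instead of acting on bad data.

        # Toy 2-out-of-3 (triplex) voter -- an illustration of the redundancy idea only,
        # not any real flight or vehicle control system.
        def triplex_vote(a, b, c, tolerance):
            """Return (command, healthy): the median of the three channel outputs,
            and whether at least two channels still agree within tolerance."""
            median = sorted([a, b, c])[1]                 # one bad channel cannot win the vote
            agreeing = sum(abs(x - median) <= tolerance for x in (a, b, c))
            return median, agreeing >= 2                  # healthy only while two channels agree

        # Example: channel b has failed hard; the voter still passes a sane command
        # because channels a and c agree, and the disagreement can be logged.
        command, healthy = triplex_vote(a=2.01, b=57.0, c=1.99, tolerance=0.1)
        print(command, healthy)                           # 2.01 True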

    You have a grand engineering job on your hands, similar to the one we emerged from victorious – not at all just a software-outfit-fix recipe job. This will take rigorously grounded, multi-discipline, career-devoted, hard-core engineering teams of the highest character and long-term experience, with hand-in-glove federal transportation safety agency coordination and guidance – not just a bunch of sorcerer's apprentices, entrepreneurs like Musk, or corporations like GM, vying to sell cars with glib ads hyping the next specious "autopilot" techno-Jesus mirage.

    Your first daily task, from now on, is this: integrate and optimize whatever you contribute today toward preventing One – just One, the next one – vehicle crash over the rest of your lifetime and legacy. That is what all the original Boeing 747 engineers did, from Joe Sutter on down, across the board. With that honesty you just might eliminate most of those unnecessary fatalities, the way our Boeing culture would have unceremoniously set itself to doing. We are at the threshold of urgent progress – or tragedy.

    All my best wishes and expectations, Mr. Dan O’Dowd, and Mr. Ralph Nader.

    Raymond Greeott

    Confidential to Ralph Nader:

    You wrote your wonderful dedication, Unsafe at Any Speed, as a public citizen, to socially engineer a few dollars of safety-harness and common-sense design rigors into American auto design. Thanks, Ralph, for actually saving my life three times. Simple shoulder harnesses are almost infallibly safe; not even a single bruise.

    Being educated at Berkeley, not trained as merely a high-math technician, I never aspired to entering a company where optimum engineering for the common good was not the gold standard. In 1966 Boeing courageously launched the best platforms for the new transportation age, using the 747 as the development platform for future generations of commercial aircraft – for the first time optimized to make fatalities all but impossible and simultaneously to be the most fuel- and cost-efficient. I sensed Boeing's unprecedented opportunities and responsibilities in tackling the 747, got myself hired and moved to Seattle. Once we got off the stupid SST techno bandwagon, we dedicated our lives to a legacy of leading the world in perfecting optimum maximum-efficiency, maximum-safety Mach 0.82 air travel. We perfected and built the best planes and survived on a 3.5% company profit margin for three wonderful decades before the Jack Welch clones took over top-down control and turned our great engineering devotion into a cynical, sociopathic Scrooge $$$ money machine – damn a few crashes now and then.

    To me, the insights we immediately need to rise above the unregulated capitalist temptation of selling fake-autopilot-driven cars and trucks are best learned from those decades of Boeing's immaculate-record, safety-critical, full-time, impossible-to-disable, self-managing control systems. I have summarized the basic architectural essentials here. I have not cluttered my life studying what level of "autopilot" design car makers are actually working toward, but what Dan O'Dowd is publicizing seems to reveal a rather clueless sorcerer's-apprentice engineering mentality, subservient to sales and finance types gunning down priorities from the stock-pumping head shed. That is the black-hole entry into the end of the American experiment.

    We are at the threshold of either using advanced electronics to carry us into a vastly improved safety era, or using miserably cranked-out, driver-confusing and disabling Rube Goldberg techno-complexity trash as a safety mirage – a hype campaign used to sell cars the way they put shark fins on 1959 Plymouths. If fake-safety "design" goes forward on anything like the Tesla course, as described by Dan O'Dowd, it can become a debacle almost overnight that will make what you crusaded against in the '60s look innocent.

    Unsafe at Any Speed needs to be written for this brave new age as if it had never been written before, grounded in my personal reflections above. We elders have got to get this industry transformed, re-engineered to the best authentic overall safety-and-performance compromise. (I hope you drive an old Volvo 240 wagon like I do.) You are the only guy with the deep integrity to do it – and I happily note you have not lost your great sentience, incisive wisdom, integration – the instinct for the critical architecture, the direct simplicity of the fail-safe solution. The key guys you need to know as resources, who will back up what I am saying to the hilt, are between 80 and 95 years old – so time is ticking fast. I can tell you who the two absolutely world-best fail-safe flight-safety systems architects are. Their mastery derives from the unforgiving, mandatory, full-time SST yaw-damper stability control systems (etc.), and they are the architects of the flight-systems miracles of the 747 and all the previously impossible, now legendarily perfect systems that followed. They are Boeing's world masters – the life-dedicated master creators of all the immaculate aircraft control systems and avionics that I am sure preserved and saved your life perhaps innumerable times. They are comprehensive, wonderfully cordial, Mt. Rainier summit-leading, complexly integral creators at all human value levels.

    They will tell you what they think of amateur, less-than-rigorously designed vehicle "autopilot" adventures. I seriously doubt any of them will see the overall operational integrity of such a transportation network as either realistically feasible or safely sustainable with aging equipment in the long run. The Horror. The Horror.

    How are you ever going to create a triplex-redundant, rigorously fail-safe infrastructure across the country? The failure rates could be expected to rise exponentially with unsupervised vehicle age. Great – let's sell millions of those single-thread monstrosities covertly to the American public! They won't catch on till we're outta town – halfway to Mars! Let's cash in!

    I have absolutely no quarrel with Dan O'Dowd. He may have some deep integrity in him. But whether it is really there as an integral overall vision backed by lifetime experience, rather than that of a narrowly focused software engineer – and I am a bit skeptical – our best support is to harness and focus whatever is there in the essential, transformational direction. He does not at all come across with the stature of the greater, comprehensive engineers I can recommend to you. (Neither they nor I will want to be public figures.)

    Diligently getting the software coding right, without letting the mindless, soulless processor become confused, misinterpret the data, derail from its instructed task like a toy locomotive off a trestle, or do something randomly unintended or insane, is a niche skill, more secretarial: one of faithfully translating exacting engineering into processor code. It is an only recently necessary skill, largely abstract from the real engineering of functional architecture, the higher calculus of control dynamics and control-law synthesis, energies and physical relationships, control strategies, control logic, and critical redundancy management and its consequences. It secretarially encodes the real engineering into the machine, literally by instruction from the system-architect engineers. Being good at that niche does not make one a world master of aircraft control systems' mathematical, architectural or detail design, except in the very worthy sense of not making mistakes in software code strategy and implementation and of eliminating hacking wormholes. There are so many artificial vulnerabilities and gotchas in this realm that I think no one wisely uses a processor where more elegant physical circuitry is safer and competitive. That is unfortunately not the trend. I do not know how much Mr. O'Dowd can wisely influence "autopilot" architecture designers beyond the big challenge of eliminating hacking and random code errors. He is rather grandly projecting himself into the limelight of center stage. I think we just need him to rise highest into possibility, at this fateful continental divide for all of us, in how he projects and materializes what he may now be encouraged to contribute, in the original hard-won engineering tradition, behind the scenes.

    I'll be in and out of my office and residence for long spells this summer and through fall. If you wish to reply, I will be monitoring my email, perhaps a bit intermittently. If you are inspired to communicate, I can supply my landline and my wife's cell as a backup upon request.

    Most of all, I hope you are wonderfully well. To me you are almost a world. I live an integral life. I am a researcher, now deeply invested in virtually eliminating the ironic prevalence today of the brightest people, usually working indoors, having an almost 100% chance of dying from Alzheimer's. (Our average US odds are 50% between 60 and 85 – but twice that, 100%, for most indoor workers.) That is what is happening. I am out front in this, personally plan to live creatively for another 40 years if possible, and can clarify some world-best natural solutions if you would be interested. Most normal "aging" is an unnecessary mistake, but rather easy to avoid. MIT science and all. I have reversed my own biomarkers about 30 years across the board. Young middle age now. You can too.

    My honor,

    Ray Greeott