Moravec’s Paradox makes the insightful observation that because modern machines can perform complex calculations, such as finding the square root of 3,492 in a split-second, we naturally make the incorrect assumption that machines can also perform the “simple” functions that a one-year-old child can do.
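To make the first half of the paradox concrete, here is a minimal Python sketch of how trivially a machine handles the “complex” calculation; the timing code is my own illustration, not part of the original text:

```python
import math
import time

# The "hard" half of Moravec's Paradox: a calculation that would take a
# person pencil-and-paper effort is finished by a machine almost instantly.
start = time.perf_counter()
root = math.sqrt(3492)
elapsed = time.perf_counter() - start

print(f"sqrt(3492) is approximately {root:.4f}")
print(f"computed in {elapsed:.9f} seconds")  # a tiny fraction of a second
```

The “easy” half of the paradox, such as recognizing a smiling face in a game of peek-a-boo, has no comparably short program.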
A one-year-old child giggles and laughs when I play “peek-a-boo,” pulling my hand away from my face, saying “peek-a-boo!”, and then covering my smiling face again.
A young child instantly grasps the nature of this game.
But duplicating it is many times more complex for a machine, which needs the game broken down into individual instructions before it can even begin to approach the smiling face and cheerful voice that elicit laughter from a child.
A one-year-old sitting on the floor building a small tower of square wooden blocks is an activity so seemingly simple that a child can do it. Yet for a machine this child’s play is many times more complex, requiring programmed instructions that cover recognition, grasping, positioning, balancing, and not knocking over the other blocks as the tower is built.
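The decomposition described above can be sketched in code. This is a hedged illustration, not a real robotics API; every function name and step label below is hypothetical, meant only to show how many explicit instructions hide inside one “simple” act of child’s play:

```python
def stack_block(level, block_mm=40):
    """Ordered sub-tasks a machine must execute to add ONE block at `level`.

    All steps are illustrative placeholders, not calls to real hardware.
    """
    target_height = level * block_mm  # top of the existing tower, in mm
    return [
        "recognize: locate a free block in the camera image",
        "grasp: close the gripper firmly enough to hold, gently enough not to crush",
        f"position: move the block to {target_height} mm above the table without collisions",
        "balance: release only when the block sits squarely on the one below",
        "verify: confirm the tower did not topple before reaching for the next block",
    ]

# Even a modest three-block tower multiplies the instruction count.
for level in range(3):
    for step in stack_block(level):
        print(f"block {level + 1} -> {step}")
```

The child does all of this intuitively and at once; the machine must be told each sub-task, in order, every time.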
Humans have roughly 100 different types of nerve cells, billions of neurons organized into neural networks, and trillions of synaptic connections, all inside the limited cranial volume of the mind/brain.
If we theoretically attempted to manufacture an Agatha Christie robot that could artistically create bestselling murder mystery books, we would have to reproduce this same high-level independence of yes/no decision-making.
It is one challenge for a human-like machine using artificial intelligence to create the physical context in which a child correctly recognizes that “peek-a-boo” is a humorous game, and an equally difficult challenge for an Agatha Christie robot to differentiate the few brilliant storyline plots that create spellbinding suspense from the nearly infinite number of alternate, less brilliant plotlines that do not.
Housing Construction and Debugging
An interesting aspect of the building-construction career I worked in is that houses are too large to be fully assembled in one central factory location and then shipped to their individual building sites.
This means that the debugging that occurs on a mass-production assembly line during the initial trial-run phase, where the manufacturing of each identical product is worked out in the most efficient and cost-effective way, cannot be utilized in new housing construction because of this inhibiting factor of large size.
Everyone in new building construction knows that architects, engineers, and interior designers cannot invest the time to produce perfect design plans and still economically make a profit.
Every new building construction project, starting with the smallest house (two-bedroom/two-bath), must debug its problems and mistakes individually onsite as they arise, problems that would otherwise be identified and corrected en masse through the upfront trial runs that are the normal process when identical products are manufactured in large numbers on mass-production assembly lines.
The key point here is that the issues and problems that arise unexpectedly during the various phases of new housing construction can, on a mass-production assembly line, beneficially be pushed backwards in time into the highly condensed, upfront trial-run phase, because those products can be shipped after they are fully assembled.
This vital feature eliminates the reactive, daily “putting out fires” of housing construction, where unexpected and unanticipated problems are unearthed as each new phase of the work materializes, a mode of operation that would be unacceptable on a daily basis in mass-production manufacturing.
Extending this concept to the creation of the universe at the Big Bang: how difficult would it be to manufacture an essentially error-free physical universe without the benefit of a trial run?
The speed of light generated at the Big Bang has one optimum numerical value, against an infinity of possible “no” choices that were deliberately rejected at time t=0.
The force of gravity generated at the Big Bang has one very specific value that coordinates and integrates with an accompanying suite of other mathematical constants and values in the cosmos, again with an infinity of possible “no” choices that appear to have been deliberately rejected with recognizable, upfront intention at time t=0.
The expansion rate of the universe generated at the Big Bang has one fine-tuned value that enables other physical realities to complement one another and coalesce into a universe that supports complex life like ourselves.
This expansion rate appears to have had no conceivable trial-run debugging phase in which to achieve error-free function, popping out at the Big Bang at time t=0.
Murder mystery stories can go in innumerable directions, some of which produce timeless bestsellers and some of which do not.
Quantum mechanics now tells us that the mathematics can be constrained upfront to limit the solutions coming out of a wave function, enabling a universe like ours to pop out of non-material mathematics into a physical world in the first moments of the Big Bang.
This requires a Mathematical Physicist, in existence prior to the creation of time, to put the required constraints into the quantum-physics equation and actualize a functional universe like this one.
This moderating discretion of choice-making distinguishes functional “yes” options from “no” rejections that could otherwise have created a life-prohibitive universe.
According to the most recent science, this intelligent choice-making had to exist before the Big Bang creation of space and time, putting the necessary constraints into the mathematical wave-function equation of quantum mechanics even before quantum mechanics existed as a physical reality.
The limitation that large size imposes on the pursuit of mistake-free efficiency in new housing construction reveals that the foresight of human designers, who imagine final outcomes on paper that materialize into physical buildings, still lacks the quality of divinely timeless foresight that could eliminate upfront every crash and collision of building elements competing for the same space.
Such crashes and collisions are routinely resolved on every new mass-production assembly line that has an initial trial-run debugging phase.
On the Origin of Phyla: Interviews with Dr. James Valentine, Access Research Network, published Oct. 22, 2014, on YouTube.
Moravec’s Paradox: Why Are Machines So Smart, Yet So Dumb?, Up and Atom, published July 8, 2019, on YouTube.
The Return of the God Hypothesis: Stephen Meyer interviewed by Dr. Sean McDowell, May 13, 2020, on YouTube.