Analogizing Digital Technology with Two Types of "Waves":  
One Endangers Privacy; the Other Existentially Threatens Liberty

(WrennLaw.Com/Digital_Tsunami.htm)

Digital-Age "Shore-Line Waves" Pose Dangers to Privacy (and, a fortiori, to Attorney-Client Confidentiality and Privilege).

"Shore-line waves" are the types of waves found at a beach.  I use such waves as an analogy to illustrate digital-age dangers to privacy, which a fortiori, pose dangers to attorney-client confidentiality, attorney-client privilege and attorney work-product.  Such "shore-line wave" challenges to our legal-ethics duties are the subjects of most of my CLE seminars focusing on digital-age or high-technology issues. Due to the nature of the challenges posed by such shore-line waves, I believe they are not effectively surmountable on a micro-management basis by each individual lawyer or law firm. Instead, I think a macro solution (imposed by the judicial branch pursuant to its inherent judicial power over the independent administration of justice) could surmount such dangers.   I plan to continually focus on these types of digital-age challenges to legal ethics, so periodically check the listings of my seminars at TRTCLE.Com to find whatever may be my most-recent seminar on the subject.  However, the dangers posed by the soon-to-come "Digital-Age Tsunami" will, I fear, constitute an insurmountable, existential threat to the very concept of liberty writ large.  This latter danger is the focus of this article starting immediately below:

A Digital-Age Tsunami is Coming Soon, Which Will Pose an Existential Threat to Liberty Itself.

All members of our legal profession and judiciary know we are in the most rapidly changing technological period in all of human history. Most of us, however, do not fully grasp the scale and scope of such changes or the unrelenting acceleration of such changes that will continue into the future. Such changes will make the astounding rate of technological change in the 20th Century seem like slow-walking through a museum of history. Such changes will pose fundamental challenges to our profession's and the judiciary's abilities to preserve the fundamental liberties for which our Constitution makes us the guardians. To understand how to meet those challenges, we must first understand their nature, scope and magnitude. What we know about technological change in recent history can help us better understand the nature, scale, scope and accelerating rate of the challenges we will face.

In an effort to facilitate such understanding, I have for several years used a description of the recent history of technological advances -- the advances that yielded today's ultra-microscopic hardware, which miniaturizes the kinds of computerized analysis, processing and archiving that mere decades ago were available only on a "mainframe" computer -- to help illustrate the gigantic scope and magnitude of the changes that our soon-to-arrive transition to molecular tools for such purposes will yield. Here is an updated, brief historical summary of the technological evolution/revolution bringing us close to the time when the tsunami will be upon us.

A good starting point for this brief historical summary is the work of Alan TuringFn-01 in leading the team that created a machine that enabled the British government to break the Nazi code (known as "Enigma") in the midst of World War II.

In 1945, a then-state-of-the-art computer (in the U.S.) named the "Electronic Numerical Integrator and Computer" (ENIAC)Fn-02 had 18,000 vacuum tubes (i.e., 18,000 on/off switches for "0" or "1"). Thus, it was a computer with 18,000 "bits" (or 2,250 "bytes"), and the floor space required for this 18,000-bit (2,250-byte) computer was 20 feet by 40 feet -- 800 square feet, approximately the size of a two-car garage today.  [Interesting side-note:  If we were to use today's technology to build a computer having no more computational power than ENIAC, the entire computer would be so tiny that it would be invisible to the human eye and visible only through our current technology's most powerful microscopes.  Conversely, if we were forced to design a modern smart-phone using ENIAC technology, the length of such a smart-phone would be greater than the diameter of the Sun.]
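For readers who want to check the arithmetic above, here is a minimal sketch (in Python) using the figures as stated; the one-bit-per-tube premise is this article's simplification for illustration, not an engineering specification of ENIAC's actual (decimal) architecture:

```python
# Back-of-the-envelope check of the ENIAC figures above, taking the
# article's premise that each vacuum tube acts as one on/off bit.

tubes = 18_000                 # vacuum tubes, per the article
bits = tubes                   # 1 bit per tube (article's simplifying premise)
bytes_ = bits / 8              # 8 bits per byte
floor_ft2 = 20 * 40            # 20 ft x 40 ft footprint

print(f"{bits:,} bits  =  {bytes_:,.0f} bytes")   # 18,000 bits = 2,250 bytes
print(f"footprint: {floor_ft2} square feet")      # 800 square feet
```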

Technological advances in the 1950s led to vacuum tubes being replaced by transistors,Fn-03 which were dramatically smaller and which, in comparison to vacuum tubes, required a fraction of the electricity and generated a tiny fraction of the heat.  By 1958, the creation of integrated circuitsFn-04 paved the way for printed circuits using transistors to be replaced by silicon chips.  In the early 1960's (before silicon chips), NASA was still using humans (mathematical geniuses) to compute orbital mechanics -- watch the movie Hidden Figures (about NASA's reliance on such geniuses) if you haven't already done so.  Regarding our landing on the moon in 1969, watch the documentary Apollo 11 if you haven't already done so.  Pay attention to the "computer" warnings (from the computer guiding the lunar lander toward a landing) that NASA deemed safe to ignore, after which Neil Armstrong (a test pilot with nerves of steel) calmly took control of the landing away from the computer, with little more than a minute remaining, in order to land safely rather than crash.  (Useful comparison:  Now look at your watch.  If it's a digital watch, consider the fact that the computational power of the tiny chip-set inside your watch is much greater than that of the dramatically larger computer on the lunar lander from which Armstrong took control.)

Since the invention of the silicon chip in the 1960's, our technology has been able (as predicted by "Moore's Law"Fn-05) to double the computational power of computers every eighteen months to two years.  By the 1980s, technological advances yielded the birth of the internet and, by decade's end, the "World Wide Web" -- see footnote 5a.Fn-05a
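To get a feel for what that doubling rate implies, here is a minimal sketch (Python) of the compounding; the start year, end year and two-year doubling period are illustrative assumptions for the arithmetic, not figures from any particular chip roadmap:

```python
# Rough illustration of Moore's-Law-style compounding: doubling every
# two years from 1965 to 2025 (endpoints chosen only for illustration).

start_year, end_year = 1965, 2025
doubling_period_years = 2

doublings = (end_year - start_year) / doubling_period_years   # 30 doublings
growth_factor = 2 ** doublings                                 # 2^30

print(f"{doublings:.0f} doublings -> roughly a {growth_factor:,.0f}x increase")
# 30 doublings -> roughly a 1,073,741,824x increase (about a billion-fold)
```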

Even though the working parts of today's best silicon chips are visible only through our most powerful microscopes, those parts still constitute what we call "hardware."  But our technology is fast approaching the end of "Moore's Law,"Fn-06 after which we will be unable to make the working parts of silicon chips any smaller without those parts losing their structure as microscopic "hardware."  Thereafter, to continue increasing computational power (without increasing size), the "hardware" nature of the working parts will have to be replaced with a molecular structure -- so that the "memory" and "processing" components cease to be "hardware" and become "molecular" instead.  Around the turn of this century, a scientific fraud perpetrated by a young physicist (Hendrik Schön) briefly turned the scientific world upside-down (and led to widespread embarrassment among the world's foremost physicists and scientific publications) when he convinced the scientific community that he had mastered the transition from hardware memory to molecular memory.Fn-07

Depending on the extent to which DARPA research (or other, private research) may already have designed ways to transition from hardware memory to molecular memory, we may still be waiting for the end of Moore's Law. But when that transition occurs (sooner rather than later in this century), the working parts of our best, fastest computers will be molecular, and because organic molecules thus far appear best suited for that function, it will become relatively easy for such "organic" computers to interface directly with the human brain.

That will bring within reach what Larry Page, a co-founder of Google, described (in a 2004 interview with Steven Levy, recounted in Levy's book In the Plex chronicling the rise of Google) as Google's "long-term goal": "When you think a question, [Google] will just tell you the answer."Fn-08

Several years ago, a computer scientist designed a form of headgear with sensors to detect the neural activity generated by a human brain engaged in "thinking," coupled with highly sophisticated algorithms to categorize the detected neural activity and convert it into electronic data to be transferred to a search engine -- enabling the wearer of such headgear to "think a question" and be "told" the answer.Fn-09
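Purely as an illustration of the kind of pipeline such a device implies, here is a minimal, entirely hypothetical sketch (Python); the sensor readings, the classifier, and the query function are invented placeholders, not the actual system described in the footnote:

```python
# Hypothetical sketch of a "think a question, be told the answer" pipeline:
# sensor signal -> classification into words -> query -> returned answer.
# None of these functions correspond to a real device or API.

from typing import List

def read_neural_samples() -> List[float]:
    """Placeholder for whatever signal the headgear's sensors would produce."""
    return [0.12, 0.87, 0.43]          # dummy values for illustration only

def classify_to_words(samples: List[float]) -> List[str]:
    """Placeholder for the 'highly sophisticated algorithms' step."""
    return ["is", "the", "garage", "door", "closed"]

def query_search_engine(words: List[str]) -> str:
    """Placeholder for handing the reconstructed question to a search engine."""
    question = " ".join(words)
    return f"(answer to: '{question}?')"

if __name__ == "__main__":
    samples = read_neural_samples()
    words = classify_to_words(samples)
    print(query_search_engine(words))
```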

The above-described headgear is non-invasive and is still made with what we understand to be "hardware."  In recent years, however, important strides have been made toward designing organic computers, which, by their very nature, will have the capability of directly interfacing with the human brain.  For example:  In 2012, Stanford's Drew Endy and his lab figured out a way to turn DNA into a rewriteable data-storage device that can operate within a cell,Fn-10 and Michio Kaku narrated a video about "Tweaking Moore's Law and Computers of the Post-Silicon Era."Fn-11  In 2014, Israeli scientists used DNA to design molecular wire for computers.Fn-12  A 2015 article predicts that DNA-based computers are coming "soon" to replace microscopic-level silicon-chip computers,Fn-13 and a 2017 article reinforces the prediction that DNA computers (organic computers) are "coming soon" to replace silicon-chip computers.Fn-14

Once we've transitioned to organic computers that can directly interface with the human brain, and we then use an implanted version of such a device (i.e., an implanted, organic version of a smart-phone) to "think the question" and be "told the answer," how will it be possible for the thinker to actually know whether the question is, or is not, "his" or "her" own question?

If you think this is outrageous speculation, consider how you can use a smart-phone today:

Assume you have a modern smart-phone and a modern security system at your house that enables you to use your smart-phone to control aspects of that security system (and aspects of equipment in your house that may be connected to, or controlled by, that security system) using technology available today. Assume you drive from your home to the airport, board a plane, and fly to Hong Kong for a meeting. When you exit the plane in Hong Kong, you suddenly wonder whether you remembered to close your garage door before leaving for the airport. So, with your fingers or voice, you command your smart-phone to query your security system at home to learn whether that door is up or down. You receive an answer indicating it's "up." Then, with your fingers or voice, you command your smart-phone to command your security system to close the door, in response to which the door closes (on the other side of the planet).
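As a purely illustrative sketch of what that remote query-and-command exchange looks like with today's technology, here is a minimal Python example; the host name, endpoints, token and JSON fields are hypothetical stand-ins, not any real security-system API:

```python
# Hypothetical sketch: query a home security system for garage-door status,
# then command it to close. The URL, endpoints, credential and field names
# are invented for illustration; a real vendor's API will differ.

import requests

BASE_URL = "https://home-security.example.com/api"    # hypothetical host
HEADERS = {"Authorization": "Bearer <your-token>"}     # placeholder credential

def garage_door_is_open() -> bool:
    resp = requests.get(f"{BASE_URL}/garage-door/status", headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()["state"] == "open"              # e.g. {"state": "open"}

def close_garage_door() -> None:
    resp = requests.post(f"{BASE_URL}/garage-door/close", headers=HEADERS, timeout=10)
    resp.raise_for_status()

if __name__ == "__main__":
    if garage_door_is_open():      # the "is it up?" query from Hong Kong
        close_garage_door()        # the "close it" command, sent back home
```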

Now, assume it's merely a few years into the future (within this century) and you make the same trip to Hong Kong described above, but instead of having a smart-phone in your hand, you have an organic smart-phone interfaced with your brain. Now, while on the other side of the planet in Hong Kong, you use your thought to ask your security system whether the door is "up," and when you're "told" that it's "up," you use your thought to command the door to close, and it does so (on the other side of the planet).

You may think most people would refuse to have such an organic smart-phone implanted and interfaced with their brains, but think also about the typical news report you see whenever the newest smart-phone is scheduled to become available -- i.e., people lined up around the block to be the first to get one. Then answer this question: When this kind of technology becomes available, how many people will say, "Let everyone else have this God-like power, but don't give it to me"?

Such a technological advance will be a Technological Tsunami that will overwhelm and obliterate a key aspect of liberty: not merely autonomous thinking itself, but the ability even to know whether one's thinking is actually autonomous.

Because it's our profession's mission to "protect liberty writ large," we have a duty to think (while we still can) about this Tsunami in order to have any hope of influencing society (and science) to develop defenses or effective countermeasures against autonomous thinking becoming extinct.

There's a lot more on the evolution of digital technology at my website here.

________________

Footnotes:

Fn-01. 1940s:  Alan Turing (Father of the Modern Computer) (Vacuum Tubes).  Watch the movie The Imitation Game if you haven't already done so.  British computer at Bletchley Park.  (Concept:  Turing Machine)  (Video Explanation of "Turing Machine")

Fn-02.  1945:   ENIAC computer (USA) (gigantic; 18,000 Vacuum Tubes) (See video) (See more at Smithsonian)

Fn-03.  1950s:  Transistors replace vacuum tubes.

Fn-04.  1958:  Integrated circuits (silicon chips) replace transistors (first demonstrated in September 1958).

Fn-05.  1965:  Moore's Law:  Progress with the silicon chip had begun increasing exponentially.  The prediction by Intel co-founder Gordon Moore that processing power would double (and size fall by half) every two years became known as "Moore's Law" and proved true for decades to come.

Fn-05a.  In the 1980s, an advanced network (ARPANET), previously designed for the U.S. military under ARPA (later renamed DARPA, the Defense Advanced Research Projects Agency), became the "internet."  In the early 1980's, use of ARPANET expanded to include commercial activity serving the needs of advanced research as well as the needs of the Department of Defense.  Such expansion intensified the need for interoperability protocols ("Interop") and prompted collaborative efforts to satisfy that need.  As interoperability between ARPANET and other networks increased, it became a network between and among networks (i.e., an "internet").  "By the middle of the 80's there were ARPANET gateways to external networks across North America, Europe, and in Australia, and the Internet was global in scope."  (Marty Lyons created a hand-drawn map of the network gateways existing as of 18 June 1985.)  In 1985, NSF (which had created CSNET in 1981) considered how it could provide greater access to the high-end computing resources at its recently established supercomputer centers; because NSF intended the supercomputers to be shared by scientists and engineers around the country, any viable solution had to link many research universities to the centers.  The result, NSFNET, went online in 1986 and connected the supercomputer centers at 56,000 bits per second -- the speed of a typical dial-up modem.  In a short time the network became congested, and by 1988 its links were upgraded to 1.5 megabits per second.  A variety of regional research and education networks, supported in part by NSF, were connected to the NSFNET backbone, extending the Internet's reach throughout the United States.  The creation of NSFNET was an intellectual leap:  it was the first large-scale implementation of Internet technologies in a complex environment of many independently operated networks -- interoperability among them resting on the packet-switching concept envisioned by Paul Baran -- and it forced the Internet community to iron out technical issues arising from the rapidly increasing number of computers and to address many practical details of operations, management and conformance.  (Source for above.)  This mid-1980's growth in the need for interoperability prompted collaborative activities leading to the first Interop "trade show" in 1988.  By that time the network extended throughout the United States (and, for research and some governmental purposes, to research centers outside the U.S.), which stimulated expansion of the Internet to include private and commercial usage in addition to research, governmental and government-contracting uses.
1989:  Invention of the "World Wide Web" to enhance the functionality of the internet (which had been invented in the U.S.A.):  In 1989, Tim Berners-Lee and Robert Cailliau (scientists at CERN in Switzerland) began designing a proposal for ways to enhance the functionality of that network.  Building on the concept of hypertext (pioneered in work such as Doug Engelbart's), they completed the project by designing "HyperText Markup Language" (HTML) as code enhancing the functionality of the Internet by improving the ways computers could assemble, organize and display data and enable human interaction with it.  When they completed the project in 1990, they named it the "World Wide Web" (or "WWW" or "www").  (More at http://www.w3.org/Consortium/history.html.)

Fn-06.  1990s:  Fears Rising About the Inevitable End of Moore's Law:  In the 1990's, computer scientists worldwide were concerned that computer technology was fast approaching the end of "Moore's Law" -- i.e., that it would no longer be feasible to continue halving the size of hardware memory (silicon chips) and thus no longer possible to continue doubling computational power every two years.  A worldwide race began among computer scientists and physicists to develop a way to replace hardware memory (silicon chips) with a molecular form of memory and thereby overcome the end of Moore's Law.

Fn-07.  2002:  Exposé of the scientific scandal described above:  a virtually unanimous scientific consensus had deemed Nobel-Prize-worthy a claimed "molecular-chip" solution to the fast-approaching end of "Moore's Law" (the exponential increase in silicon-chip computational power) -- see the documentary video:  Dark Secret of Hendrik Schön.

What was the scandal?  

The "holy grail" being sought by computer engineers world-wide was how to design molecular memory to replace  microscopic memory. To do so would enable a gigantic leap from today's micro-chip technology to molecular "chip" technology that would be much greater than the leap from radio tubes directly to micro-chips. 

Jan Hendrik Schön, a brilliant physicist already widely respected in his field, claimed to have invented/designed a workable system for molecular memory.  Nature, among the world's most prestigious scientific periodicals for peer review and publication of scientific discoveries and knowledge, published (and, in doing so, lauded) three articles by Schön explaining his design of a working system for molecular memory.  For years, worldwide scientific "consensus" accepted his work as valid; the consensus was that it was not whether, but merely when, Schön would receive the Nobel Prize.

Then a graduate student, working late into the early-morning hours on a research assignment, discovered something strange about Schön's three articles:  three graphs (one in each article), purporting to show exquisitely detailed plots of raw data (not regression analyses) from three experiments supposedly conducted years apart, were identical down to the most minor detail -- which would be a scientific impossibility.

Although academic-pecking-order roadblocks impeded her efforts to expose the fraud at first, it wasn't long before Nature was trying to smile while wiping the three (count 'em, three) eggs off its face.  Ultimately, of course, the young graduate student's work exposed the fraud that the scientific community of experts had so eagerly and readily embraced by "consensus" as true (without conducting any scientifically rigorous investigation).

Why did so many eminent scientists accept his "work" so uncritically?  It seems their emotional investment in the desire for it to be true was so strong that it overpowered their training in scientific skepticism -- see Scientific Scandal:  Scientific Consensus Completely Fooled by Bogus Scientific Claim of Achieving Molecular Memory to Overcome Moore's Law.  Does this seem familiar (as in the current, uncritical, politically enforced embrace of Anthropogenic Global Warming as though it were scientific fact rather than a still-unproven hypothesis)?  Watch Dark Secret of Hendrik Schön -- view the entire video:  [here] [here] [here] or [here].

Why were the most prestigious physicists and scientific publications so eager to believe Schön's claims, and thus blind to the signs of fraud?  Here is a link to the starting point of the video explanation:  [explanation of the need to overcome Moore's Law].  Here is a link to the scientific investigation that ultimately exposed the fraud:  [Investigative Report by Bell Labs in 2002].  BBC information about the documentary:  [archived information here].  Alternate links to the video documentary Dark Secret of Hendrik Schön -- entire video:  [here] [here] [here] or [here]; or beginning with the explanation of "Moore's Law":  [here].

Post-Scandal -- In the wake of the scandal, scientists worldwide intensified their efforts to develop workable molecular-level replacements for microscopic-level "chips."  (Wouldn't it be nice if the currently politicized scientific community would apply the same intellectual rigor to what is now being touted as "the science" of AGW?)

Fn-08.  (Bold-italics added.)  This is a quotation of Google co-founder Larry Page from Steven Levy's book In the Plex, which quotes Levy's 2004 interview of Page and his fellow Google co-founder, Sergey Brin.  Click here for the Wayback Machine link to that quotation.  Click here for a subsequent CNET article quoting Levy's quotation of Page.  If simply clicking the link above doesn't work for you, copy/paste this url into your internet browser:  http://web.archive.org/web/20151212182539/http://www.cnet.com/news/at-15-googles-ambitions-remain-unbridled/.

Fn-09.  2018:  Think the Question, Hear the Answer from Google via Skull Vibrations.  [here or here] (April 4, 2018.)  Although this appears to suggest that the technological means for virtually destroying free will has already arrived, that is not the case, because under this particular procedure the means of detecting the question is still externalized via a device fitting on the outside of the skull, so it remains possible for the questioner to choose the question.  But at the speed at which organic versions of such technology are advancing, the feasibility of an organic "implant" lies in the near, not distant, future.  What is likely to happen, and when?  Keep thinking that question as you continue reading to the end (or, if you're too impatient, click this and then come back here).

----------------

Fn-10.  2012:  Turning DNA into a hard drive:  Stanford's Drew Endy and his lab figured out a way to turn DNA into a rewriteable data storage device that can operate within a cell.  (Click the link for the entire article.)

Fn-11.  2012:  Tweaking Moore's Law and Computers of the Post-Silicon Era (Michio Kaku).

Fn-12.  2014:  Israeli Scientists Use DNA to Design Molecular-Wire for Computers.

Fn-13.  2015:  Prediction:  DNA-Based Computers Soon to Replace Microscopic-Level Silicon Chips.

Fn-14.  2017:  Prediction:  DNA Computers (Organic Computers) Coming Soon.
