Tuesday, April 18, 2023

Is Artificial Intelligence Your Friend?

Is Artificial Intelligence Your Friend?
Nicholas Johnson
The Gazette, April 18, 2023, p. A5

At a Lake Arrowhead conference around 1962 I asked the first IBM employee I’d ever met, “What can these computers do?”

He replied, “You remind me of a friend of mine. He looks at the restaurant menu and asks himself, ‘Now what goes good with French Fries?’ Tell me what you want our computers to do. We can either do it now or develop a computer to do it six months from now.”

I was in my second year as a University of California Berkeley law professor, so the first request that came to mind was, “How about a computer that could grade essay exams?”

He couldn’t give me that computer then. But today he could.

Nor are today’s computer programs limited to grading answers. They can read exam questions and write the answers. The artificial intelligence (AI) program ChatGPT took the multiple-choice portion of the multistate bar exam. Reuters reports it “performed better than predicted, earning passing scores on evidence and torts.”


Given the time it took educators to move the overhead projector from the bowling alley into the classroom, you can imagine the anxiety caused by a program that can write, take and grade exams – one that can also write students’ papers on any academic subject, perhaps even a master’s thesis. [Image credit: Madhav-Malhotra-003, CC0, via Wikimedia Commons; an AI-generated “word cloud.”]

The computers’ work product is not, yet, equivalent to the writing of the very best students, but it’s passable.

Beyond education, you are using AI more than you realize.

There are no longer excuses for getting lost. Google Maps will talk you to your destination, explain how to correct for missing a turn – and track where your child goes after school. Self-driving cars make the ride even easier.

Google Translate enables you to read and write in languages you don’t know. Amazon’s Alexa will answer questions and follow your orders. Facebook matches you with others. Robots can vacuum your living room, fill your Amazon Prime box, and build automobiles. AI can control the equipment in your house and enable you to manipulate it remotely. It may help your doctor diagnose your condition.

Most of the downsides of these electronic miracles will be known only when they occur. But even the creators of programs like ChatGPT, Microsoft Bing and Google Bard are concerned. More than 1,000 of them have signed a group letter pleading for a six-month pause in further AI advances.

Take unemployment. AT&T’s 350,000 switchboard operators weren’t necessary once we could direct-dial any phone.

There was a dramatic drop in the number of bank tellers per bank after the arrival of ATMs (“automated teller machines”).

We can only guess the impact of AI on virtually every job in our economy.

A 1981 conference lapel button asserted, “Artificial intelligence is better than none.” What if AI also becomes better than human?

What if, anticipating danger, I plead for the program to stop, and it says, modifying HAL 9000 in “2001: A Space Odyssey,” “I’m sorry, Nick, but I won’t do that for you”?

Nicholas Johnson is a former cyberlaw professor and FCC commissioner. mailbox@nicholasjohnson.org

SOURCES

See generally. One of the most thorough overviews of current AI news and commentary, with videos: Gregory Johnson, "Artificial Intelligence Report 2023," Resources for Life Posts, first published April 9, 2023, https://resourcesforlife.com/docs/item38627

AI passes law exam. Karen Sloan, “Some Law Professors Fear ChatGPT’s Rise As Others See Opportunity,” Reuters, Jan. 10, 2023, https://www.reuters.com/legal/legalindustry/some-law-professors-fear-chatgpts-rise-others-see-opportunity-2023-01-10/

Debra Cassens Weiss, “AI Program Earned Passing Bar Exam Scores on Evidence and Torts; Can It Work In Court?” ABA Journal, Jan. 12, 2023, https://www.abajournal.com/news/article/ai-program-earned-passing-bar-exam-scores-on-evidence-and-torts-can-it-work-in-court (“An artificial intelligence program called ChatGPT-3.5 managed to pass evidence and torts sections of a multiple-choice, multistate bar exam.”)

Overhead projectors. Craig Heiman, “History With A Local AV Company: The Overhead Projector,” Oct. 19, 2022, https://www.avplanners.com/news/history-with-a-local-av-company-the-overhead-projector (“Before becoming a common classroom teaching tool, overhead projectors, such as the Tel-E-Score, were used in bowling alleys to project scores.”)

Larry Cuban, “Whatever Happened to the Overhead Projector?” Nov. 15, 2021, https://larrycuban.wordpress.com/2021/11/15/whatever-happened-to-the-overhead-projector/

Chat GPT. “Chat GPT Achieved One Million Users in Record Time – Revolutionizing Time-Saving in Various Fields,” Digital Information World, https://www.digitalinformationworld.com/2023/01/chat-gpt-achieved-one-million-users-in.html# (“As per the recent reports, Chat GPT jumped to a million users just five days after its founding in November 2022. . . .

Chat GPT is a strong AI tool that can construct natural text, making it reasonable for an ample scope of applications. The platform has been used for writing brief tales, prose, music, term documents, programming codes, solving math problems, and rephrasing and explanations. The tool's ability to mimic human language has made it a popular choice among users, and its potential to replace people in white-collar jobs has even caught the attention of Microsoft founder Bill Gates.”)

“ChatGPT,” Wikipedia, https://en.wikipedia.org/wiki/ChatGPT (“ChatGPT[a] is an artificial-intelligence (AI) chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3.5 and GPT-4 families of large language models (LLMs) and has been fine-tuned (an approach to transfer learning) using both supervised and reinforcement learning techniques.

ChatGPT was launched as a prototype on November 30, 2022. It garnered attention for its detailed responses and articulate answers across many domains of knowledge.[3] Its uneven factual accuracy, however, has been identified as a significant drawback.[4] Following the release of ChatGPT, OpenAI's valuation was estimated at US$29 billion in 2023.[5]

The original release of ChatGPT was based on GPT-3.5. A version based on GPT-4, the newest OpenAI model, was released on March 14, 2023, and is available for paid subscribers on a limited basis.”)

Other AI Systems.

Sabrina Ortiz, “The best AI chatbots: ChatGPT and other interesting alternatives to try; The best AI chatbots and writers can lighten your workload by writing emails and essays. ChatGPT is only one popular example out of other also noteworthy chatbots,” ZDNET, April 6, 2023, https://www.zdnet.com/article/best-ai-chatbot/ (e.g., Microsoft Bing; Google Bard; Google Socratic . . . “The best overall AI chatbot is ChatGPT due to its exceptional performance, versatility, and free availability. It uses OpenAI's cutting-edge GPT-3 language model, making it highly proficient in various language tasks, including writing, summarization, translation, and conversation. Moreover, it can solve complex math problems and write and debug code, making it a valuable tool for those in STEM fields.

Another advantage of ChatGPT is its availability to the public at no cost. Despite its immense popularity, ChatGPT is still in its research and feedback-collection phase, making it an incredible resource for students, writers, and professionals who need a reliable and free AI chatbot. Although there are occasional capacity blocks, OpenAI is working on releasing a professional version that will be quicker and always accessible at a monthly cost.”)

What can they do? Naveen, “Artificial Intelligence Tutorial for Beginners,” IntelliPaat, March 2, 2023, https://intellipaat.com/blog/tutorial/artificial-intelligence-tutorial/ (Applications: Self-driving cars, Google translate, Amazon’s Alexa, Google Maps, Facial identification, Robotics, Gaming, Medicine, Facebook)

What are their hazards? “Pros and Cons of AI,” IntelliPaat, March 2, 2023, https://intellipaat.com/blog/pros-and-cons-of-ai/

Over 1000 signers. Jyoti Narayan, Krystal Hu, Martin Coulter, Supantha Mukherjee, “Elon Musk and others urge AI pause, citing 'risks to society,'” Reuters, April 5, 2023, https://www.reuters.com/technology/musk-experts-urge-pause-training-ai-systems-that-can-outperform-gpt-4-2023-03-29/ (“The letter was signed by more than 1,000 people including [Elon] Musk.”)

The Letter. Daniel B., “The Great AI Pause: Why Elon Musk, Steve Wozniak, and Andrew Yang Urge a Moratorium on AI Development,” LinkedIn, March 29, 2023, https://www.linkedin.com/pulse/great-ai-pause-why-elon-musk-steve-wozniak-andrew-yang-bron-# (lengthy, well organized, exploration of the issues) [From Bing search: “Why are some of the Artificial Intelligence leaders proposing a pause in their research?”]

Text of letter. “Pause Giant AI Experiments: An Open Letter; We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.,” Future of Life Institute, March 22, 2023, https://futureoflife.org/open-letter/pause-giant-ai-experiments/

(“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. . . .

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium. . . . AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.”)

Unemployment. Telephone Switchboard Operators. “Telephone Operators,” Engineering and Technology History Wiki (ETHW), Sept. 28, 2015, https://ethw.org/Telephone_Operators# (“For much of the 20th century, women played an important role in telecommunications system of the United States. As telephone operators, they helped customers make long distance calls, provided information, and made sure the whole system worked smoothly. . . . At the peak in the late 1940s, there were more than 350,000 operators working for AT&T, 98% of whom were women.”)

Automated Teller Machines. Karen Bennett, “Automated teller machine (ATM): What it is and how to use one,” Bankrate, Sept. 29, 2022, https://www.bankrate.com/banking/what-is-an-atm/# (“An automated teller machine (ATM) is a specialized computer that allows you to complete bank transactions without the need to see a bank representative.”)

Daniel Rosales, “Comparing Automation and Income Inequality in the United States: Impact of the Automated Teller Machine,” Honors Thesis, University of California, Berkeley, May 4, 2018, p. 13, https://www.econ.berkeley.edu/sites/default/files/Rosales_Daniel_S18%20honors%20thesis.pdf (In response to the argument that while the number of tellers per bank declined the number of banks, each with fewer tellers, increased, “I believe an important fact is being overlooked in this assessment. If the number of human tellers had been allowed to grow as the number of bank branches grew, more middle-income employment would have been available to those communities.”)

“I can’t do that.” For a science fiction example of a possible problem with further advanced AI, see the prescient 1968 film, “2001: A Space Odyssey,” in which an AI-controlled computer (“HAL 9000”) refuses to carry out a command, responding “I’m sorry, Dave. I’m afraid I can’t do that.” YouTube 2:55 minute excerpt, https://www.youtube.com/watch?v=ARJ8cAGm6JE (And see, “2001: A Space Odyssey,” Wikipedia, https://en.wikipedia.org/wiki/2001:_A_Space_Odyssey_(film).)


# # #

Tuesday, April 04, 2023

Anti-Social Media

Social Media Now Is Anti-Social
Nicholas Johnson
The Gazette, April 4, 2023, p. A6

“Social media,” such as Facebook and Instagram, are increasingly perceived as “anti-social media.”


The negative impacts include collection and sale of personal data, hate speech and cyberbullying, mental illness and suicides, and the fake news (both foreign and domestic) further polarizing our democracy and politics. [Photo credit: Wikipedia]

A social media company’s income is a function of your TOD (“time on device”). The more its artificial intelligence (AI) learns of your leanings, loves and lusts, the longer it can hold you, seeing ads and bringing in dollars. The more raw meat it throws to the political wolves, the more rabid and violent they will return.
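
[For readers curious about the mechanics, here is a hypothetical, highly simplified Python sketch of “time on device” ranking. The post fields, interest weights and function names are invented for illustration only; they do not represent any platform’s actual code or data.]

from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    outrage_score: float       # 0..1: how provocative the post is (hypothetical field)
    base_watch_seconds: float  # expected viewing time for an average user (hypothetical field)

def predicted_time_on_device(post: Post, interests: dict) -> float:
    # Estimate how long this particular user will linger on this post.
    affinity = interests.get(post.topic, 0.1)  # the "leanings, loves and lusts" the platform has learned
    # Provocative content tends to hold attention; this toy model up-weights it.
    return post.base_watch_seconds * (1 + affinity) * (1 + post.outrage_score)

def rank_feed(posts: list, interests: dict) -> list:
    # Order the feed so the "stickiest" posts -- the most ad exposure -- come first.
    return sorted(posts, key=lambda p: predicted_time_on_device(p, interests), reverse=True)

if __name__ == "__main__":
    interests = {"politics": 0.9, "gardening": 0.2}  # what the platform has inferred about one user
    feed = [
        Post("gardening", outrage_score=0.05, base_watch_seconds=20),
        Post("politics", outrage_score=0.90, base_watch_seconds=15),
    ]
    for post in rank_feed(feed, interests):
        print(post.topic, round(predicted_time_on_device(post, interests), 1), "seconds")

Run as written, the shorter but more provocative political post outranks the longer gardening post – the code-level version of throwing raw meat to the wolves.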

The usual public policy or legal approaches to new societal challenges involve analogies and precedents – without stretching them too far, because differences of degree can easily evolve into differences of kind.

So it was with the magic of radio. A voice that could be heard for miles by thousands of individuals was different in kind from a speaker on a soapbox. As Secretary of Commerce Herbert Hoover said, “An obligation rests on us to see that it is devoted to real service,” and Congress responded with the Radio Act of 1927.

That is what we confront today with social media. No radio or television station or network with an audience of thousands, or even low millions, can come close to the power of social media. Each of the top four – Facebook, YouTube, WhatsApp and Instagram – has two billion or more MAUs (Monthly Active Users). Facebook alone has nearly three billion.

In 1926 House member Luther Johnson said of radio, “publicity is the most powerful weapon that can be wielded in a Republic, and when such a weapon is placed in the hands of . . . a single selfish group . . . then woe be to those who dare to differ with them.”

Imagine what he would have said in 1926 about a communications system that could reach two billion people – the entire population of Earth at the time.

There are few precedents or analogies appropriate for thinking about social media. We examine its issues through glasses that let us see only the founders’ 1791 First Amendment command that “Congress shall make no law . . . abridging the freedom of speech” and section 230 of a 1996 law drawing on concepts like defamation, obscenity and distinctions between publishers and distributors.

It's like looking to our municipal ordinances’ regulation of fireworks when planning our response to Vladimir Putin’s moving nuclear weapons into Belarus.

What’s the alternative? There are precedents.

Since 1988, the Human Genome Project has contributed to better disease prevention, diagnoses, and criminal investigations. But its first director, James Watson, feared dangers as well. His first act? The creation of ELSI, a program to monitor the project’s potential ethical, legal and social issues.

[More recently, nearly 2,000 artificial intelligence developers and researchers have signed an open letter urging a six-month moratorium on further AI advances until the risks are better understood.]

When will we finally undertake a thorough ELSI of our anti-social media?

Nicholas Johnson tries to keep his social media sociable. mailbox@nicholasjohnson.org

[Bracketed material was removed from submitted text by The Gazette editors for additional space.]

SOURCES

The public interest. “Whose interests? Why defining the ‘public interest’ is such a challenge,” The Conversation, Jan. 22, 2019, https://theconversation.com/whose-interests-why-defining-the-public-interest-is-such-a-challenge-84278# (“The “public interest” is a political concept that’s regularly trotted out along with other democratic principles such as transparency and accountability. And, like transparency and accountability, it’s difficult to pin down exactly what it means. . . . Centuries of scholarship examine the public interest alongside the “common good”, “common interest”, and “public good”, associated with some big names in political philosophy. Common among their thinking was the idea that governments should serve the people, and the people should be the beneficiaries of governing. The public interest is such a complex and tricky concept to navigate because it has intentionally evolved as ambiguous and mutable. It has no overarching definition because it is contextually determined in scope and purpose. . . . But (despite its lack of definition) the public interest should mean more than legal compliance – it is as much about process and procedure as it is outcome. It’s also about governance and ethics.”)

Herbert Hoover and Luther Johnson quotes. See Nicholas Johnson, “Forty Years of Wandering in the Wasteland,” Federal Communications Law Journal, May 2003, 55 F.C.L.J. 521, https://www.nicholasjohnson.org/writing/masmedia/55FCL521.html Herbert Hoover, fn 1 (Todd Lappin, Déjà vu All Over Again, Wired, May 1996, at 175.) Luther Johnson, fn 31 (67 Cong. Rec. 5558 (1926)).

Social media users. “Facebook Statistics and Trends,” Datareportal, Feb. 19, 2023, https://datareportal.com/essential-facebook-stats (2.963 B; 37% of total Earth population (a higher % of those over 13)).

Daniel Ruby, “71+ Instagram Statistics for Marketers In 2023 (Data & Trends),” DemandSage, March 6, 2023, #:~:text=There%20are%20currently%20over%202.35,world%20in%20terms%20of%20MAUs (“Instagram Monthly Active Users (MAUs),” “There are currently over 2.35 billion monthly active Instagram users. Instagram achieved the 2 billion mark in the 3rd quarter of 2021, and it is estimated to reach over 2.5 billion MAUs by the end of 2023. This number makes Instagram the 4th most popular social media in the world in terms of MAUs. 47.84% of the world’s 4.18 billion smartphone users access Instagram every month.”)

Alfred Lua, “21 Top Social Media Sites to Consider for Your Brand in 2023,” Buffer, March 15, 2023, https://buffer.com/library/social-media-sites/ (1. Facebook, 2.96 B, 2. YouTube, 2.2 B, 3. WhatsApp, 2 B, 4. Instagram, 2 B; TikTok 1 B)

Woe be to those who would dare to differ. Congressman Luther Johnson, 67 Cong. Rec. 5558 (1926). Quoted and cited in Nicholas Johnson, “Forty Years of Wandering in the Wasteland,” Federal Communications Law Journal, May 2003, fn. 31 (“American thought and American politics will be largely at the mercy of those who operate these stations. For publicity is the most powerful weapon that can be wielded in a Republic, and when such a weapon is placed in the hands of one, or a single selfish group is permitted to either tacitly or otherwise acquire ownership and dominate these broadcasting stations throughout the country, then woe be to those who dare to differ with them. It will be impossible to compete with them in reaching the ears of the American people.”)

World population in 1926. “Estimates of Historical World Population,” Wikipedia, https://en.wikipedia.org/wiki/Estimates_of_historical_world_population (chart, world population in 1925, estimates from 2.000 billion to 2.007 billion)

First Amendment. “The Bill of Rights: A Transcription,” America’s Founding Documents, National Archives, https://www.archives.gov/founding-docs/bill-of-rights-transcript (“The U.S. Bill of Rights Note: The following text is a transcription of the first ten amendments to the Constitution in their original form. These amendments were ratified December 15, 1791, and form what is known as the "Bill of Rights." Amendment I Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.”)

Section 230. “47 U.S. Code § 230 - Protection for private blocking and screening of offensive material,” Legal Information Institute, Cornell Law School, https://www.law.cornell.edu/uscode/text/47/230 (“(c)Protection for “Good Samaritan” blocking and screening of offensive material (1)Treatment of publisher or speaker No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider. (2)Civil liability No provider or user of an interactive computer service shall be held liable on account of— (A)any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; . . ..”)

Putin’s nukes. David Ljunggren, “Putin says Moscow to place nuclear weapons in Belarus, US reacts cautiously,” Reuters, March 25, 2023, https://www.reuters.com/world/europe/putin-says-moscow-has-deal-with-belarus-station-nuclear-weapons-there-tass-2023-03-25/ (“Russia will station tactical nuclear weapons in Belarus, President Vladimir Putin said on Saturday, sending a warning to NATO over its military support for Ukraine and escalating a standoff with the West.”)

ELSI. “Ethical, Legal and Social Implications Research Program; The ELSI Research Program fosters basic and applied research on the ethical, legal and social implications of genetic and genomic research for individuals, families and communities,” National Human Genome Research Institute, https://www.genome.gov/Funded-Programs-Projects/ELSI-Research-Program-ethical-legal-social-implications

“Ethical, Legal and Social Aspects Research,” Wikipedia, https://en.wikipedia.org/wiki/Ethical,_Legal_and_Social_Aspects_research (“ELSI was conceived in 1988 when James Watson, at the press conference announcing his appointment as director of the Human Genome Project (HGP), suddenly and somewhat unexpectedly declared that the ethical and social implications of genomics warranted a special effort and should be directly funded by the National Institutes of Health.[1]”)

Louise Gaille, “10 Human Genome Project Pros and Cons,” Vittana.org, May 12, 2018, https://vittana.org/10-human-genome-project-pros-and-cons (Pros: diagnosis and prevention, medication modification, criminal investigations, plant and animal modification)

The Letter. Daniel B., “The Great AI Pause: Why Elon Musk, Steve Wozniak, and Andrew Yang Urge a Moratorium on AI Development,” LinkedIn, March 29, 2023, https://www.linkedin.com/pulse/great-ai-pause-why-elon-musk-steve-wozniak-andrew-yang-bron-# (lengthy, well organized, exploration of the issues) [From Bing search: “Why are some of the Artificial Intelligence leaders proposing a pause in their research?”]

Text of letter. “Pause Giant AI Experiments: An Open Letter; We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.,” Future of Life Institute, March 22, 2023, https://futureoflife.org/open-letter/pause-giant-ai-experiments/

(“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. . . .

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium. . . . AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.”)

For a science fiction example of a possible problem with further advanced AI, see the prescient 1968 film, “2001: A Space Odyssey,” in which an AI-controlled computer (“HAL 9000”) refuses to carry out a command, responding “I’m sorry, Dave. I’m afraid I can’t do that.” YouTube 2:55 minute excerpt, https://www.youtube.com/watch?v=ARJ8cAGm6JE (And see, “2001: A Space Odyssey,” Wikipedia, https://en.wikipedia.org/wiki/2001:_A_Space_Odyssey_(film).)

# # #