The Big Picture –
By Glynn Wilson –
WASHINGTON, D.C. — It may be too little too late, or too little too soon, but at least the Biden administration is trying to do something to keep people grounded in reality.
What are the Christo-fascists in the House doing other than trying to push the United States of America back into the Dark Ages?
On this Halloween 2023, how important is reality to you? Where is the line between enjoying fictitious fun on a holiday, and seeing factual reporting on important issues involving the very real struggles of life and death facing society?
Would you rely on a comic book, or a film series based on one, to plan your life?
Hey, not being a fan of Hollywood horror movies, I tend to gravitate to more intellectual projects, where the scary fiction actually has something to say about the realities of life. So for my Halloween streaming viewing experience, I chose “The Sandman,” a series based on the DC comic written by Neil Gaiman, which came out on Netflix in 2022.
To quote the official setup, “There is another world that waits for all of us when we close our eyes and sleep — a place called the Dreaming, where The Sandman, Master of Dreams played by Tom Sturridge, gives shape to all of our deepest fears and fantasies. But when Dream is unexpectedly captured and held prisoner for a century, his absence sets off a series of events that will change both the dreaming and waking worlds forever. To restore order, Dream must journey across different worlds and timelines to mend the mistakes he’s made during his vast existence, revisiting old friends and foes, and meeting new entities — both cosmic and human — along the way.”
I found the depiction of hell in Part 4 fairly compelling compared to other fictional accounts. But when a psychiatric hospital escapee named John runs off with the powerful ruby of Morpheus and experiments with its power to alter reality in a local diner, a truth about human reality is revealed. He sets out to use the power of the ruby to “make the world a better place, a more honest place.”
But once people are made to forgo the nicety of little white lies and tell the unvarnished truth, they just go crazy, start arguing and end up killing each other, or themselves.
Maybe that’s what’s wrong with the world today, where somehow we’ve managed to turn a once great country into the home of suicide shooters, killing others and themselves at an alarming rate. This is what the Russian media used to say America was like back in the 1980s. Somehow it has come true, thanks to unregulated social media and the destabilizing propaganda of Donald Trump.
But back to the effort of this government to try to regulate the new technology. It may seem boring as news, but it’s something people should be paying attention to.
Governments the world over have faced growing pressure for the past several years to do SOMETHING about runaway technology.
As the New York Times put it in its reporting this week, “How do you regulate something that has the potential to both help and harm people, that touches every sector of the economy and that is changing so quickly even the experts can’t keep up?”
“Regulate A.I. too slowly and you might miss out on the chance to prevent potential hazards and dangerous misuses of the technology. React too quickly and you risk writing bad or harmful rules, stifling innovation or ending up in a position like the European Union…”
In an announcement probably obscured by all the other sensational news about the war in Israel and more domestic mass shootings, the White House moved to govern the fast-moving world of Machine Learning (ML) with a sweeping executive order that imposes new rules on companies and directs federal agencies to begin putting guardrails around the technology.
In the absence of any regulatory attempt to address the problem by Congress, President Biden signed an executive order putting some modest rules in place and signaling that the federal government intends to keep a close eye on Big Tech’s so-called “Artificial Intelligence” or AI going forward.
We choose not to call it that, and will adopt Noam Chomsky’s language. We call it Machine Learning, or ML.
As the Times points out, social media was allowed to grow unimpeded for more than a decade before regulators showed any interest in it. In retrospect, that was a big mistake.
The executive order runs more than 100 pages.
Most noteworthy, companies that make the largest ML systems will be required to notify the government and share the results of their safety testing before releasing their models to the public.
The requirements will be enforced through the Defense Production Act, a 1950 law that gives the president broad authority to compel U.S. companies to support efforts deemed important for national security. That could give the rules teeth.
In addition, the order will require cloud providers that rent computers to ML developers, including Microsoft, Google and Amazon, to tell the government about their foreign customers. And it instructs the National Institute of Standards and Technology to come up with standardized tests to measure the performance and safety of these models.
The executive order also contains some provisions that will please the AI ethics crowd, according to the Times, a group of activists and researchers who worry about near-term harms from the technology, such as bias and discrimination, and who think that long-term fears of human extinction from the technology are overblown.
We are less worried about humans becoming extinct than the threat of news reporters becoming extinct, which has major implications for democracy.
The executive order directs federal agencies to take steps to prevent these algorithms from being used to exacerbate discrimination in housing, federal benefits programs and the criminal justice system. It directs the Commerce Department to come up with guidance for watermarking AI-generated content, which could help crack down on the spread of artificially generated misinformation.
Executives of the companies pioneering this technology seem relieved that the White House’s order stopped short of requiring them to register for a license to train their models, does not require them to pull any of their current products off the market, and does not force them to disclose the kinds of information they have been seeking to keep private, such as the size of their models and the methods used to train them.
It also doesn’t try to curb the use of copyrighted data in training the models — a common practice that has come under attack from artists and other creative workers in recent months and is being litigated in the courts.
Tech companies will also benefit from the order’s attempts to loosen immigration restrictions and streamline the visa process for workers with specialized expertise in the technology.
Hard-line safety activists may wish that the White House had placed stricter limits around the use of the large models, or that it had blocked the development of open-source models, whose code can be freely downloaded and used by anyone.
“But the executive order seems to strike a careful balance between pragmatism and caution, and in the absence of congressional action to pass comprehensive A.I. regulations into law, it seems like the clearest guardrails we’re likely to get for the foreseeable future,” the Times concludes.
A former New York Times writer who now writes for The Washington Post expressed views this week that more closely align with my own. That is, it’s become clear to me that Google is an evil empire, and the federal government would do well to regulate the hell out of it.
Google Pixel’s ad campaign is destroying humanity
“If you’ve just traveled here in a time machine from some distant, dystopian future to answer the question of when and where humanity went tragically wrong in the early part of the 21st century, then I suppose I could direct you to nationalism or war or rising oceans, any one of which might explain our imminent ruin,” writes Matt Bai. “But let me direct you, instead, to the ad campaign for the newest Google smartphone, which encapsulates all the horrors of our moment and gleefully promises to make them worse.”
He is way too nice to Google for the most part, but he makes a good point.
“If you want to muddy the line between truth and invention in your Instagram feed, this is the phone for you,” he writes. “The message here is unmistakable. Don’t be a prisoner to unsatisfying reality. Just make it whatever you want it to be.”
Our society continues to struggle with the dawn of a new age of misinformation, he says, including “deepfakes, digital impostors (and) foreign bots. But here’s Google, making it not only easy but also glamorous to clandestinely alter the moments of your life and share them with all your friends.”
While our politics reels from a crisis of self-certainty, where entire communities believe what they want to believe and disregard any evidence to the contrary, he says, “No worries! Here’s Google to help you wall yourself off in a world of conspiracy and make-believe, where the only memories worth keeping are the ones that present the world as you’d like it to be, rather than as it is.”
Our children suffer from the cruelty of Darwinian social media, he continues, where bullying and exclusion traumatize the less socially adept.
“Thank heaven for the Pixel phone, which makes it simple to eliminate that irritating photo-bomber before you post. Why whisper meanly about the uncool kid when you can erase her altogether with the swipe of a finger?”
He goes on, but it comes down to this.
“… let’s be clear-eyed about the crisis at hand. We are waging a war right now to defend the very concept of truth from those who would obliterate it. Beset by the growing capability of AI and online disinformation, people are rapidly losing faith in the notion of objective reality. What seems to them unlikely or undesirable becomes, literally, unbelievable.”
It matters how we portray truth, he says.
“… to make the explicit selling point of that phone the notion that imperfect truths don’t need to exist anymore — that what’s real is both fungible and subjective — strikes me as reckless. It romanticizes the most destabilizing trend in society and invites us all to revel in it.”
Following up on the executive order for the “Responsible Development of Artificial Intelligence,” the U.S. Department of Homeland Security sent out a press release about it by email. We run it here in its entirety for your edification.
On October 30, 2023, President Biden issued a landmark Executive Order to promote the safe, secure, and trustworthy development and use of artificial intelligence (AI). The Biden-Harris Administration places the highest urgency on governing the development and use of AI safely and responsibly, and is therefore advancing a coordinated, federal government-wide approach to doing so. The Department of Homeland Security (DHS) will play a critical part in ensuring that AI use is safe and secure nationwide. DHS’s own use of AI will be achieved responsibly, while advancing equity and appropriately safeguarding privacy, civil rights, and civil liberties.
The direction provided in the EO is consistent with DHS’ innovative work in ensuring the safe, secure, and responsible use and development of AI. DHS will manage AI in critical infrastructure and cyberspace, promote the adoption of AI safety standards globally, reduce the risks that AI can be used to create weapons of mass destruction (WMD), combat AI-related intellectual property theft, and help the United States attract and retain skilled talent. The EO follows on DHS’s work deploying AI responsibly to advance its missions for the benefit of the American people.
Managing AI in Critical Infrastructure and Cyberspace
Advances in AI will revolutionize critical infrastructure operations and ultimately the delivery of services upon which Americans rely daily. But it will also present novel risks. To protect U.S. networks and critical infrastructure, the President has directed DHS to take several steps to help govern the safe and responsible development and use of AI.
First, the President has directed Secretary of Homeland Security Alejandro N. Mayorkas to establish and chair an AI Safety and Security Advisory Board (AISSB) to support the responsible development of AI. This committee will bring together preeminent industry experts from AI hardware and software companies, leading research labs, critical infrastructure entities, and the U.S. government. This AISSB will issue recommendations and best practices for an array of AI use cases to ensure AI deployments are secure and resilient.
Second, DHS will work with stakeholders inside and outside of government to develop AI safety and security guidance for use by critical infrastructure owners and operators. The Cybersecurity and Infrastructure Security Agency (CISA) is assessing potential risks related to the use of AI in critical infrastructure sectors, including ways in which deploying AI may make critical infrastructure systems more vulnerable to critical failures, physical attacks, and cyberattacks. We will also take a global, harmonized approach by working with international partners on these guidelines.
Finally, DHS will capitalize on AI’s potential to improve U.S. cyber defense. CISA is actively leveraging AI and machine learning (ML) tools for threat detection, prevention, and vulnerability assessments. Furthermore, DHS will conduct an operational test to evaluate AI-enabled vulnerability discovery and remediation techniques for federal civilian government systems.
Reducing Risks at the Intersection of AI and Chemical, Biological, Radiological, and Nuclear Threats
The advent of AI may make it easier for malicious actors to develop WMD. Of particular concern is the risk of AI-enabled misuse of synthetic nucleic acids to create biological weapons. To mitigate this risk, DHS will work with the White House Office of Science & Technology Policy and other relevant U.S. government agencies to evaluate the potential for AI to lower the barriers to entry for developing WMD. Furthermore, DHS will develop a framework to evaluate and stress test synthetic nucleic acid screening, creating a standardized set of expectations for third parties that audit AI systems for misuse and prevent the risk of abuse and proliferation by malicious actors.
Combatting AI-related Intellectual Property Theft
Protecting AI intellectual property (IP) is critical to U.S. global competitiveness. IP theft threatens U.S. businesses, impacts American jobs, and negatively affects our national security. To address this challenge, DHS, through the National Intellectual Property Rights Coordination Center, will create a program to help AI developers mitigate AI-related IP risks, leveraging Homeland Security Investigations (HSI), law enforcement, and industry partnerships. DHS will also contribute to the Intellectual Property Enforcement Coordinator Joint Strategic Plan on Intellectual Property Enforcement.
Attracting and Retaining Talent in AI and other Critical Emerging Technologies
Cultivating talent in AI and other emerging technologies is critical to U.S. global competitiveness. To ensure that the United States can attract and retain this top talent, DHS will streamline processing times of petitions and applications for noncitizens who seek to travel to the United States to work on, study, or conduct research in AI or other critical and emerging technologies. DHS will also clarify and modernize immigration pathways for such experts, including those for O-1A and EB-1 noncitizens of extraordinary ability; EB-2 advanced-degree holders and noncitizens of exceptional ability; and startup founders using the International Entrepreneur Rule.
DHS has already advanced policy consistent with direction in the EO:
On October 20, 2023, U.S. Citizenship and Immigration Services (USCIS) published a Notice of Proposed Rulemaking to modernize the H-1B specialty occupation worker program and enhance its integrity and usage; USCIS continues to work on rulemaking to enhance the process for noncitizens, including experts in AI and other critical and emerging technologies and their spouses, dependents, and children, to adjust their status to lawful permanent resident.
On September 12, 2023, USCIS clarified guidance on evidence for EB-1 individuals of extraordinary ability or outstanding professors or researchers.
DHS Leads in the Responsible Use of AI
AI is already delivering significant value across DHS, and it will only become more significant to every part of our operations in the years to come.
Concrete examples of where DHS is already seeing benefits from AI include the following:
Fentanyl Interdiction: U.S. Customs and Border Protection (CBP) uses an ML model to identify potentially suspicious patterns in vehicle-crossing history. Recently, CBP used the model to flag a car for secondary review, which yielded the discovery of over 75 kilograms of drugs hidden in the automobile.
Combatting Online Child Sex Abuse: Recently, HSI Operation Renewed Hope identified 311 previously unknown victims of sexual exploitation thanks in part to an ML model that enhanced older images to provide investigators with new leads.
Assessing Disaster Damage: The Federal Emergency Management Agency (FEMA) uses AI to assess damage to homes, buildings, and other property after a disaster more efficiently. Using ML, data from past incidents, and pre-disaster imagery, FEMA can classify different levels of damage. During disasters, FEMA uses the output from the ML model to significantly reduce the number of impacted structures that need to be physically reviewed for damage. This allows FEMA’s analysts to process images in days, as opposed to weeks, and gets disaster assistance to survivors that much faster.
While these examples focus on border security, investigations, and disaster response, every DHS Agency and Office is working to responsibly integrate AI, harnessing its potential to further improve DHS operations for the benefit of the American people.
Protecting Civil Rights, Civil Liberties and Privacy
DHS maintains a clear set of principles and robust governance that prioritizes the protection of civil rights, civil liberties, and privacy. The Department’s approach is the foundation for its work to ensure AI is used responsibly across DHS’s unique missions. DHS policy outlines the Department’s commitment to lean forward in deploying AI tools to enhance operations and lead the government in the responsible and ethical use of AI, ensuring the acquisition and use of AI in a manner that is consistent with the U.S. Constitution and all other applicable laws and policies. Among other commitments, DHS will not collect, use, or disseminate data used in AI activities or establish AI-enabled systems that make, or support, decisions based on the inappropriate consideration of race, ethnicity, gender, religion, sexual orientation, gender identity, age, medical condition, or disability.
The Department’s governance and oversight for the responsible use of AI is a closely coordinated, highly collaborative effort that unites operational and business-process stakeholders from across the Department around the common goal of ensuring responsible use. In April 2023, Secretary Mayorkas established the Department’s first Artificial Intelligence Task Force to drive specific applications of AI to advance critical homeland security missions.
The DHS AI Task Force includes a Responsible Use Group, led by the Officer for Civil Rights and Civil Liberties, which is developing tailored approaches to provide guidance, risk assessment, mitigation strategies, and oversight for the protection of individual rights in projects championed by the DHS AI Task Force. An AI Policy Working Group coordinates work to effect Departmental policy change and apply oversight to all DHS AI activities through collaboration among the Office of the Chief Information Officer, Science and Technology Directorate, Office of the Chief Procurement Officer, Office for Civil Rights and Civil Liberties, the Privacy Office, and the Office of Strategy, Policy, and Plans.
___
If you support truth in reporting with no paywall, and fearless writing with no popup ads or sponsored content, consider making a contribution today with GoFundMe or Patreon or PayPal. We just tell it like it is, no sensational clickbait or pretentious BS.
Before you continue, I’d like to ask if you could support our independent journalism as we head into one of the most critical news periods of our time in 2024.
The New American Journal is deeply dedicated to uncovering the escalating threats to our democracy and holding those in power accountable. With a turbulent presidential race and the possibility of an even more extreme Trump presidency on the horizon, the need for independent, credible journalism that emphasizes the importance of the upcoming election for our nation and planet has never been greater.
However, a small group of billionaire owners controls a significant portion of the information that reaches the public. We are different. We don’t have a billionaire owner or shareholders. Our journalism is created to serve the public interest, not to generate profit. Unlike much of the U.S. media, which often falls into the trap of false equivalence in the name of neutrality, we strive to highlight the lies of powerful individuals and institutions, showing how misinformation and demagoguery can harm democracy.
Our journalists provide context, investigate, and bring to light the critical stories of our time, from election integrity threats to the worsening climate crisis and complex international conflicts. As a news organization with a strong voice, we offer a unique, outsider perspective that is often missing in American media.
Thanks to our unique reader-supported model, you can access the New American Journal without encountering a paywall. This is possible because of readers like you. Your support keeps us independent, free from external influences, and accessible to everyone, regardless of their ability to pay for news.
Please help if you can.
American journalists need your help more than ever as forces amass against the free press and democracy itself. We must not let the crypto-fascists and the AI bots take over.
Just because we are not featured on cable TV news talk shows, or TikTok videos, does not mean we are not getting out there in search engines and social media sites. We consistently get over a million hits a month.