Connecting the Dots –
By Glynn Wilson –
Computer hacker-programmers have been creating bots to do various things since the 1990s. Bot is short for robot, although these bots don’t look like the robots from science fiction going back to the 1950s.
If you use any search engine to ask “what is a bot?” you will get very similar results. High in the search results you will find Amazon’s answer.
What is a bot?
“A bot is an automated software application that performs repetitive tasks over a network. It follows specific instructions to imitate human behavior but is ‘faster and more accurate’,” it says. “A bot can also run independently without human intervention.”
It makes you wonder if a chat bot wrote that definition, because the answer is mostly sympathetic toward the bots. The examples they list say “bots can interact with websites, chat with site visitors, or scan through content.”
“While most bots are useful,” they claim, “outside parties design some bots with malicious intent.”
Like the Russians and the Chinese, of course. They don’t cite the hackers in Silicon Valley.
Up until recently, the running of bots online has been mostly invisible to most users. But as the tech giants ramp up the development of bots in the ongoing massive Machine Learning experiment, what advocates like to call Artificial Intelligence, the bots are beginning to come of age into an adolescence that is making them more visible.
Meta is one of the companies experimenting with AI. So the bots are beginning to come out of the closet, so to speak, on Facebook, Instagram, Threads and WhatsApp, programs owned and run by Mark Zuckerberg's Meta. Never mind that the Department of Justice should be breaking up these companies. There are ongoing lawsuits about it.
For years, people, including some in Congress, have been trying to pressure Meta to moderate its harmful content, especially since 2015-2016, when Steve Bannon, then the editor of the right-wing Breitbart News, and Robert Mercer, a libertarian billionaire with a supercomputer, helped get Trump elected by stealing the personal data of millions of people and manipulating them with a malicious bot that infected Facebook. It made Hillary Clinton look to the world like a corrupt politician by pushing out memes and stories about her use of a private email server while she was Secretary of State (the FBI and Justice Department found she did nothing illegal), and by repeating the phrases Trump used over and over again on social media and on television: "Crooked Hillary" and "Lock Her Up."
It convinced enough Black guys in urban and suburban areas of key cities like Philadelphia, Pittsburgh, Milwaukee and Detroit to vote against Hillary and for Trump. That's the reality of how Trump won the election.
Related
Lord A-Mighty What Will Become of Us Arguing Beasts
Facebook-Cambridge Analytica Scandal Lawsuit Settled for Undisclosed Sum
Here’s the problem for today.
As these bots reach adolescence, they are out there on the web sucking up all this content, trying to learn from it. What do you think they are learning? This is happening in an era when human selfishness and narcissism are out of control, thanks to Trump and social media. So the lesson they are learning is how to be selfish and care only about themselves.
They are like children, in a way. What’s one of the first things children learn? What word do they learn from their parents probably more than any other? They learn to say “no.”
Just the other day, I was in an ice cream and candy store called Candyland on Highway 15 in Maryland, about 15 miles south of the actual Mason-Dixon Line on the way to Gettysburg, Pennsylvania. I've made the Catoctin Mountains near the Mason-Dixon Line my escape from global warming and the worst effects of climate change, at least in the summer when the heat and humidity take over much of the country.
I didn’t catch what the parent asked the kid to do, probably a 7-year-old boy. But I noticed his response and it stuck with me.
“No, no, NO!”
So the other day, I posted my own story about Project 2025, the hideous plan to destroy American democracy and install a theocratic dictator, along with a meme going around on social media in which Democrats are telling people to Google it.
When I shared the story on my personal Facebook page, my news business page and the group, a bot decided to say “no.” It deleted the post on my personal page, the news business page and the group, and claimed it went against Zuckerberg’s so-called “community standards.”
Of course these community standards are now being written by the bots for the bots. The standards have nothing to do with accurate information or democracy.
When my loyal following of friends, fans and followers also tried sharing the story to get it out there where more people might see it on social media, they got to experience this bot in action too. It deleted the story link on their FB pages as well.
Now are you beginning to see what we are dealing with?
Also this week, with these realizations coming to the forefront of my thinking about what we all have to deal with these days interacting with these bots, I decided to post a thought experiment for the bots. I Googled what experts say are the traits that make up bad character in humans and chose some of them. I thought, let's see if we can teach these bots some manners.
“Let’s teach the bots a little lesson,” I said, “since they are in learning mode, like children. Hey bots, here are nine character flaws you want to avoid.”
1. Arrogance
2. Greed
3. Cruelty
4. Dishonesty
5. Misplaced loyalty
6. Bigotry
7. Vanity
8. Pride
9. Jealousy
It may be too late to teach them that these are bad things, because most of the material they are learning from displays these characteristics as if they were the dominant way to be.
Then to wrap up this story and get to the primer on the different types of bots, here’s one more story demonstrating the problem we are up against.
I have an old friend who moved to Portland, Oregon, about 30 years ago. Even though he is not technically in Silicon Valley, which is in California, over the years he has developed the mentality that because of where he lives, and because he works in a startup technology business, he is an expert on modern technology. He has an undergrad degree in journalism, but he never worked as a journalist. And even though he works around new technology companies, he has never actually built a website, much less a news website, so he knows nothing about that. He has a Facebook PR page, but he never posts anything on it and has never had the experience of interacting with readers.
But when I tried to talk to him about the problems we face in Washington with this new technology, including the massive demands these AI data centers are already placing on the power grid, he literally wanted to argue with me that a bot should not be called a bot. I didn't call him for his "expert" advice. I called him as an old friend who might lend a sympathetic ear. All I got was condescension and grief.
Forrest Gump might have said, if Winston Groom had thought of it: "Selfish is as selfish does."
It may not be totally stupid, but it’s close.
Everyone with a computer and an internet connection now thinks they are an expert, so there is no need anymore for actual experts. Education and experience don’t matter anymore, because anyone can just “Google it” on their phones.
Related
What to do about the electric power demands of artificial intelligence data centers? –
So here goes. This is how the bots define themselves.
“Here are some examples of popular good bots used in enterprise applications today,” the Amazon chat bot says.
Chatbots
Chatbots simulate human conversation with artificial intelligence and machine learning (AI/ML) technologies. They can respond to queries on behalf of the customer support team. Highly intelligent chatbots like Amazon Alexa can converse naturally with humans. These chatbots are also known as knowledge chatbots.
Web crawlers
Web crawlers, or spiders, are search engine bots that scan and index webpages on the internet. They help search engines to produce a better search experience by extracting data to understand the structure and relevance of web content.
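To make that concrete, here is a minimal sketch in Python of what a crawler does under the hood: fetch a page, pull out its links, and visit them in turn. This is an illustration only, not any search engine's actual code; the starting URL, the made-up user agent string and the one-second politeness delay are assumptions for the example.

```python
# A minimal sketch of a web crawler: fetch a page, extract its links,
# and visit them breadth-first. Real search engine crawlers add robots.txt
# handling, deduplication at scale, and indexing pipelines.
import time
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen, Request

class LinkExtractor(HTMLParser):
    """Collects the href attribute of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, max_pages=10, delay=1.0):
    """Breadth-first crawl starting at start_url, visiting at most max_pages."""
    to_visit = [start_url]
    seen = set()
    while to_visit and len(seen) < max_pages:
        url = to_visit.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            req = Request(url, headers={"User-Agent": "ExampleCrawler/0.1"})
            html = urlopen(req, timeout=10).read().decode("utf-8", "ignore")
        except Exception as err:
            print(f"skipped {url}: {err}")
            continue
        parser = LinkExtractor()
        parser.feed(html)
        print(f"visited {url}, found {len(parser.links)} links")
        for link in parser.links:
            to_visit.append(urljoin(url, link))  # resolve relative links
        time.sleep(delay)  # be polite to the server

# Example (hypothetical starting point):
# crawl("https://example.com")
```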
Scrapers
Scrapers, or web scraping crawlers, scan and download specific content on the internet. For example, ecommerce businesses use scraper bots to monitor live product prices on different retail platforms. Marketers use scrapers with natural language capabilities to run sentiment analysis on social media feeds.
Shopping bots
Shopping bots scan product prices on multiple websites to help customers find the best deals. A shopping bot can also send personalized recommendations on instant messenger apps.
Monitoring bots
Monitoring bots limit your exposure to security incidents by constantly scanning your systems for bugs and malicious software. They alert you to unusual web activity by collecting and analyzing user interaction data and web traffic. Some monitoring bots can also work alongside other bots, such as chatbots, to ensure they perform as intended.
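Here is a rough sketch, again in Python, of what a very simple monitoring bot might look like. The target URL, the check interval and the "slow" threshold are assumptions for illustration; real monitoring bots feed alerts into dashboards and incident-response systems rather than just printing them.

```python
# A minimal sketch of a monitoring bot: poll a URL on a schedule and flag
# anything unusual (errors, downtime, or slow responses).
import time
from urllib.request import urlopen, Request
from urllib.error import URLError

def check_site(url):
    """Return (status_code, response_time_seconds) for one request."""
    start = time.monotonic()
    req = Request(url, headers={"User-Agent": "ExampleMonitor/0.1"})
    with urlopen(req, timeout=10) as resp:
        return resp.status, time.monotonic() - start

def monitor(url, interval=60, slow_threshold=2.0):
    """Loop forever, printing an alert when the site is down or slow."""
    while True:
        try:
            status, elapsed = check_site(url)
            if status != 200 or elapsed > slow_threshold:
                print(f"ALERT: {url} status={status} time={elapsed:.2f}s")
            else:
                print(f"OK: {url} responded in {elapsed:.2f}s")
        except URLError as err:
            print(f"ALERT: {url} unreachable ({err})")
        time.sleep(interval)

# Example (hypothetical target):
# monitor("https://example.com", interval=300)
```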
Transaction bots
Transaction bots ensure payment details are in order before finalizing transactions on ecommerce sites. They check credit card details and personal data accuracy during checkout. These bots are built with highly secure features to protect sensitive financial data.
How do bots work?
A computer bot follows precise rules and instructions to accomplish its tasks. Once activated, bots can communicate with each other or with humans using standard network communication protocols. They operate continuously to perform programmed tasks with very little human intervention.
Different types of bots use various technologies to achieve their goals. For example, chatbots use deep learning technologies such as text-to-speech, automatic speech recognition, and natural language processing to simulate human conversation and dialogue. On the other hand, web crawlers send HTTP requests to websites to read the underlying content. An HTTP request is a communication protocol that browsers use to send and receive data.
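To show what "follows precise rules and instructions" means at the most basic level, here is a toy keyword-matching chatbot in Python. It is nothing like the deep learning systems Amazon describes, and the canned answers are invented for the example, but it demonstrates the loop every bot runs: read input, apply its rules, respond, repeat, with no human in the middle.

```python
# A toy illustration of a bot following fixed rules: a keyword-matching
# chatbot with canned answers. The rules below are invented for the example.
RULES = {
    "hours": "We are open 9 a.m. to 5 p.m., Monday through Friday.",
    "price": "Prices are listed on the product page.",
    "human": "Connecting you to a customer support agent...",
}

def respond(message):
    """Return the first canned answer whose keyword appears in the message."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I don't understand. Try asking about hours or prices."

if __name__ == "__main__":
    while True:
        user = input("You: ")
        if user.strip().lower() in ("quit", "exit"):
            break
        print("Bot:", respond(user))
```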
What are the types of malicious bots?
Also known as malware bots, malicious bots perform activities that create security risks for organizations. For example, they might disrupt operations, create unfair disadvantages, send out unwanted emails, or attempt unauthorized access to sensitive data. We give some common types of malicious bots below.
Download bots
Download bots are bots programmed to download software or applications automatically. This creates a false impression of popularity and helps the application rise in ranking charts. By using download bots, an application publisher expects to gain more visibility and attract real human subscribers.
Spambots
Spambots scrape the internet for email addresses, turn the gathered data into email lists, and send spam messages in large batches. Alternatively, a spambot can create false accounts and post messages on forums and social media. These bots can entice a human user to click on a compromised website or download unwanted files.
Ticketing bots
Ticketing bots scan websites to buy tickets at the lowest price only to later resell the tickets at a higher value to make a profit. The process is naturally automated and leaves the impression that a human is purchasing the ticket. While ticketing bots are regulated in some countries, the practice is considered unethical.
DDoS bots
Distributed denial of service (DDoS) bots are malicious programs used to perform a distributed denial of service (DDoS) attack. A DDoS attack is a malicious attempt to affect the availability of a targeted system, such as a website or application, to legitimate human users. Typically, DDoS bots generate large volumes of packets or requests that may overwhelm the target system.
Fraud bots
Fraud bots, or click fraud bots, use artificial intelligence to mimic human behavior and commit ad fraud. For example, a fraud bot automatically clicks on paid ads to inflate ad revenue for the publisher. These fake clicks increase marketing expenditure without leading to real customers.
File-sharing bots
A file-sharing bot records frequent search terms on applications, messengers, or search engines. It then provides recommendations with unwanted links to malicious files or websites.
Social media bots
Social media bots, or social bots, generate false social media activity such as fake accounts, follows, likes, or comments. By imitating human activity on social media platforms, they spam content, boost popularity, or spread misinformation.
Botnet
A botnet is a group of malicious bots that works together in a coordinated manner. The group performs tasks that require a high volume of computing power and memory. In order to save costs, bot creators may attempt to install bots on network-connected devices that belong to others. In doing this, they can control the bots remotely and plan to utilize computing power without paying for it.
How do you detect and protect against malicious bots?
Malicious bots require targeted approaches to detect because they are frequently designed to evade detection by humans and computers.
Consider these approaches to protect your IT systems against malicious bots:
Instill security awareness among employees. Train employees to avoid clicking on unknown or suspicious links in emails.
Use anti-malware programs and run regular scans to detect and isolate bots in computer systems.
Install a firewall to prevent bots from accessing your computer.
Strengthen bot protection and advanced threat detection software to prevent bots. For example, organizations use Amazon GuardDuty to block malicious bots and other malware.
Use CAPTCHA to stop distributed denial of service (DDoS) and spam bots from disrupting a web server. CAPTCHA is a challenge-response test that allows web servers to tell humans apart from bots.
Enforce strong endpoint security policies and regulate sharing of portable storage drives.
Use strong and non-repetitive passwords for different user accounts.
What is bot management?
Internet traffic to your applications can come from humans or bots. Blocking all bot traffic is not the right security approach, as several bots are useful. For example, allowing web crawlers is essential to ensure webpages appear in search engine results. Bot management is a strategic approach that helps companies separate good bot traffic from malicious bot activity. While malicious bots are harmful to computer systems, good bots help to enhance productivity, cost efficiency, and customer experience.
How do good bots benefit businesses?
Good bots help companies scale operations, improve customer engagement, and increase conversion. For example, companies use customer service bots to respond promptly to customer complaints. Citibot uses AWS to develop chatbots. By integrating Amazon Lex and Amazon Kendra, their chatbots reduce call center wait times by up to 90%.
Bots benefit businesses in many ways:
Extend operation hours and provide services at any time
Optimize existing resources and reach a wider audience
Free up human employees from tedious, repetitive tasks
Collect valuable data for analytics and business intelligence
How does bot management work?
Bot management involves using bot manager software to classify bots and enforce policies according to bot behavior. Bot managers use different methods to detect if a bot is important or not. The simplest bot detection method uses static analysis to categorize bots based on web activities. Some bot managers use CAPTCHAs to separate malicious bot traffic from human users. Meanwhile, advanced bot management solutions involve machine learning technologies that study the behavioral patterns of computer activities.
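Here is a minimal sketch of that simplest detection method, static analysis of request metadata, written in Python. The bot signatures, the rate limit and the labels are illustrative assumptions, not any vendor's actual rules; real bot managers layer CAPTCHAs and machine learning behavioral analysis on top of checks like these.

```python
# A minimal sketch of static bot detection: classify a request by its
# User-Agent string and its request rate. Signature lists and thresholds
# below are illustrative assumptions only.
import time
from collections import defaultdict

KNOWN_GOOD_BOTS = ("googlebot", "bingbot")           # crawlers to allow
SUSPICIOUS_AGENTS = ("curl", "python-requests", "scrapy")
RATE_LIMIT = 20                                      # requests per minute per IP

request_log = defaultdict(list)  # ip -> list of recent request timestamps

def classify_request(ip, user_agent):
    """Return 'good bot', 'suspected bad bot', or 'human/unknown'."""
    now = time.time()
    # Keep only requests from the last 60 seconds for this IP.
    history = request_log[ip] = [t for t in request_log[ip] if now - t < 60]
    history.append(now)

    agent = (user_agent or "").lower()
    if any(bot in agent for bot in KNOWN_GOOD_BOTS):
        return "good bot"
    if any(sig in agent for sig in SUSPICIOUS_AGENTS) or len(history) > RATE_LIMIT:
        return "suspected bad bot"
    return "human/unknown"

# Example:
# print(classify_request("203.0.113.7", "python-requests/2.31"))
```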
My Conclusion
There may be such a thing as a good bot, but personally I have never met one. If these AI bots, which claim they will do good for the environment by maximizing the use of renewable energy sources, require so much electrical power that we have to delay getting rid of coal-fired and gas-powered plants, and crash the existing power grid with their learning programs in data centers before we develop enough sustainable energy sources, then the bots will be the problem, not the solution.
Recommendations
If we had a Congress that was knowledgeable about the bot problem and hard at work developing regulations to deal with it, we might be able to work our way out of this. Unfortunately, the only problems this Congress is working on are how to ban abortion, promote white nationalism, and destroy the separation of church and state by pushing the Ten Commandments into public schools. So the bots will continue to run amok.
If America’s law firms are too afraid to sue Google and Meta on behalf of journalists, there goes democracy.
Perhaps it's time for another pirate era? I don't know. Let's steal a battleship and shut down the bot empire before it can take us over completely.
Just a fantasy thought. I’ve been watching “Black Sails” on Netflix of late. They had some ideas on how to fight the British and Spanish empires in their day, and it was no liberal, commie, socialist plot. They were in it for the capitalism too. They just attacked the empire ships and stole their gold, silver and rubies so they could have a good life in Jamaica.
It didn’t last long. They were eventually put out of business by the capitalist empires.
But hey. At least they tried. These days everyone just seems content to let the capitalist bots push and shove them around. Nobody is standing up to them.
The only other solution I can think of is finding a hacker assistant like the girl in the book and movie “The Girl With the Dragon Tattoo.” If you saw that, you can see how this could end.
See you around. At least I’ve told my own story up until now.