The public is aware of some of the most sophisticated artificial intelligence (AI) systems that have won games of chess or poker against human players. Other algorithms have a reputation for being able to recognize cats or for being unable to identify persons with darker skin tones.
But are modern AI systems more than just playthings? It’s impressive that they can play games and recognize animals, but does any of this contribute to the development of genuinely useful AI systems? To answer that, we need to step back and consider what the objectives of AI are.
The core concept of AI is straightforward: By examining historical trends, we can make precise predictions about the future.
Every algorithm is based on this principle, from Google giving you advertisements for products it thinks you’ll want to buy to determining if a face in an image is you or your neighbor. AI is also being used to analyze scans and medical information to determine whether or not people have cancer.
Due to its ability to anticipate human deception, the poker-playing bot Pluribus was able to defeat the best poker players in the world in 2019.
Making predictions takes a tremendous quantity of data and the ability to digest it quickly. Pluribus, for instance, filters information from billions of card games in just a few milliseconds. To choose the best hand to play, it combines patterns and constantly refers back to its past data. It never questions what it means to look forward.
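This is not how Pluribus itself works, but the underlying idea of "predict the future from historical patterns" can be sketched with a toy frequency model: count what followed each event in past data, then predict the most common follower. Everything here (the `train`/`predict` names and the weather example) is purely illustrative.

```python
from collections import Counter, defaultdict

def train(sequences):
    """Count how often each item followed another in historical sequences."""
    model = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            model[prev][nxt] += 1
    return model

def predict(model, prev):
    """Predict the most frequently observed follower of `prev`."""
    if prev not in model:
        return None  # no history for this event
    return model[prev].most_common(1)[0][0]

# Toy "historical data": what people carried in past weather
history = [["rain", "umbrella"], ["rain", "umbrella"], ["sun", "hat"]]
model = train(history)
print(predict(model, "rain"))  # umbrella
```

Real systems replace the frequency counts with billions of learned parameters, but the principle is the same: past patterns drive the prediction, and the model has no notion of the future beyond them.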
Today there are several algorithms out there that are highly effective at their jobs, some of them so good that they can surpass human specialists. Examples include Pluribus, AlphaGo, and Amazon Rekognition.
All of these instances demonstrate how effective AI can be at making predictions. The key question is which task you want it to excel at.
The crucial difference is that human intelligence is general, whereas artificial intelligence (AI) is narrow.
AI systems are only capable of doing one task. Pluribus, for instance, is so task-specific that it is unable to even play a different card game, such as blackjack, let alone operate a vehicle or make global policy.
This is a stark contrast to human intelligence. Our ability to generalize is one of our strongest assets. Over the course of our lives, we develop high levels of proficiency in a variety of talents, including learning how to walk, play card games, and write essays.
We may choose to focus on a couple of those abilities and even pursue them as careers, but we are still able to pick up new skills and complete other activities as they come up in our daily life.
Additionally, we have the ability to transfer skills, employing one set of information to pick up new ones in another. Fundamentally, AI systems don’t operate in this manner. The accuracy of their predictions is increased through billions of iterations and the sheer volume of calculations, and they learn through endless repetition—or at least until the energy cost becomes unmanageable.
If creators want AI to be as adaptable as human intelligence, it has to develop more generalizable and transferable intelligence.
Artificial general intelligence
Artificial general intelligence (AGI) is anticipated to change how task-specific AI currently is. AGIs will be able to perform multiple tasks at once, each at an expert level, much like humans can.
Although this type of AGI hasn’t yet been created, Irina Higgins, a research scientist at Google subsidiary DeepMind, believes we’re not too far off.
“Ten to fifteen years ago, people believed that AGI was a wild pipe dream. They thought it would happen in 1,500 years, or perhaps never. But it’s occurring now,” Higgins told DW.
The modest plans call for using AGI to assist in finding solutions to some of science’s biggest questions, like how to explore space or treat cancer.
The story, however, becomes more science fiction than science as you read more about the potential of AGI; for example, supercomputers administering citywide bureaucracies, or silicon, plastic, and metal entities posing as people. But then again, every technological advancement of the modern world was, at one time, science fiction.
While breakthroughs in transformational AI fall squarely under the nonfiction category, AGI tends to veer more toward science fiction.
According to Eng Lim Goh, Chief Technology Officer at Hewlett Packard Enterprise, “humans are widening the activities a computer can accomplish, even if AI is very, very task-specific.”
Large language models (LLMs) are among the earliest transformative AI systems already in use.
In the beginning, LLMs autocorrected misspelled words in texts. They were then trained on sentence autocompletion. And now that they have analyzed so much text data, they can converse with you, as chatbots do.
From there, LLMs’ powers have been progressively expanded. Systems can now respond to visuals as well as text.
However, bear in mind that these systems are still quite limited compared to a person. Texts and visuals cannot convey human meaning to LLMs. They lack the human ability to employ texts and images in unique ways, explained Goh.
Some readers’ thoughts may now be turning to artificial intelligence (AI) “art”—algorithms like DALL-E 2 that create graphics from word input.
However, is this art? Is this proof that machines can produce art? It’s up for philosophical discussion, but many observers contend that AI just copies existing works of art rather than creating its own.
UK Government To Expand Online Bill To Criminalize Encouraging Self-Harm
In an effort to stop what it calls “tragic and preventable deaths caused by people seeing self-harm content online,” the UK government has announced it will further broaden the scope of online safety legislation by making encouraging self-harm a crime.
According to the most recent modification to the divisive but popular Online Safety Bill, in-scope platforms would be compelled to remove anything that purposefully encourages someone to physically harm themselves, or face legal repercussions.
The government intends to tackle “abhorrent trolls urging the young and vulnerable to self-harm,” according to the secretary of state for digital. People who post such content online may also be prosecuted under the new offence of encouraging self injury.
The maximum penalties will be announced in due course, according to the government.
In the UK, it is already unlawful to promote or aid suicide, whether in person or online. By creating a new offense, self-harm content will now be subject to the same laws that already ban suicide promotion.
Following a snag last summer associated with political unrest in the ruling Conservative Party, the Online Safety Bill’s progress through parliament is currently on hold. However, the newly reorganized UK government has declared that it will reintroduce the measure to parliament next month after making changes to the law.
The abuse of intimate imagery is a problem that will be addressed by recent revisions to the Online Safety Bill, which were just made public by the Ministry of Justice. However, other revisions are planned regarding “legal but harmful” content, so the final form of the Act is still up in the air.
The government responded to concerns about the bill’s impact on online freedom of expression a few months ago. The (new) secretary of state, Michelle Donelan, announced in September that she would be “editing” the bill to lessen concern about its impact on “legal but harmful” speech for adults.
The most recent changes, making it illegal to send online communications encouraging self harm, came after that announcement.
Donelan was quoted by the BBC as saying that the case of Molly Russell, a 14-year-old who took her own life five years ago after viewing thousands of online posts about self-harm and suicide on sites like Instagram and Pinterest, was a factor in the most recent changes.
An inquest into Russell’s death, concluded in September, found that social media had contributed to it, and the coroner’s “prevention of future deaths” report last month recommended that a number of steps be taken to control and monitor young people’s access to social media content.
The addition of the crime of promoting self harm, according to the Department for Digital, Culture, Media, and Sport, will outlaw “one of the most worrying and prevalent internet harms that now falls below the threshold of criminal behavior.”
Donelan stated in a statement:
“I am determined that the abhorrent trolls encouraging the young and vulnerable to self-harm are brought to justice.
“So I am strengthening our online safety laws to make sure these vile acts are stamped out and the perpetrators face jail time.
“Social media firms can no longer remain silent bystanders either and they’ll face fines for allowing this abusive and destructive behaviour to continue on their platforms under our laws.”
Hate crimes, rules regarding revenge porn (including disseminating deepfake porn without consent), harassment, and cyberstalking are among the other top criminal offenses already mentioned in the bill.
Regardless of what the measure states on paper, there are still many unknowns about how platforms will react to having legal obligations imposed on them to police all forms of speech, as well as whether the bill will actually increase web user safety as claimed.
Critics worry that the regime will have a chilling effect by turning platforms into de facto speech police and encouraging them to overblock content in order to reduce their legal risk of paying a hefty fine.
The regime’s penalties scale up to 10% of global annual turnover, and non-cooperative senior executives even run the risk of going to jail.
On Monday, December 5, the bill is scheduled to return to parliament.
Twitter Amnesty Is What Elon Musk is Going For Next
Tesla CEO and new Twitter chief Elon Musk promised a new direction for the microblogging platform before taking over, and his recent actions have largely lived up to that promise. Now the billionaire is planning an “amnesty” that is sure to drive some political divides nuts if certain individuals are granted the Twitter amnesty he wants.
Elon Musk announced on Thursday that, starting next week, Twitter will provide suspended accounts “a general amnesty.” The day before, the platform’s CEO had published a poll asking users whether they thought affected accounts should be restored.
The announcement comes just after Musk lifted the platform’s restriction on former president Donald Trump after conducting a related poll. Trump declared he had no intention of returning to the platform despite being banned following the attack on the US Capitol on January 6, 2021.
Users of the Twitter platform who had their accounts suspended could rejoin the network “assuming they have not broken the law or engaged in egregious spam,” according to Musk’s user survey.
The survey received responses from about 3.2 million individuals, who voted 72.4% in favor of amnesty.
“The people have spoken. Amnesty begins next week. Vox Populi, Vox Dei,” Musk said, using a Latin phrase that means “The voice of the people is the voice of god.”
Historically, Twitter has deactivated accounts that advocate violence, celebrate hate and harassment, or persistently disseminate false information that may be harmful.
Some well-known people who were banned from the website include MyPillow CEO Mike Lindell, who made a number of claims that Trump actually won the 2020 presidential election, former Trump advisor and former executive chairman of Breitbart Steve Bannon, who said Anthony Fauci and FBI Director Christopher Wray should be beheaded, and Proud Boys founder Gavin McInnes, who broke the website’s rule against violent extremist groups.
Considering that more voices with possibly negative views will be returning to the site, it’s unclear from Musk’s brief post how Twitter will handle content control going forward.
These worries have only grown as a result of Musk’s huge firings and the outflow of workers who would rather leave than remain “hardcore.”
Elon Musk, it seems, is growing more unpopular even while remaining as visible as ever.
Twitter Working On New Feature For Long Texts
Writing a thread on Twitter can be daunting, especially when you have to divide the text into 280-character sections for it to make sense.
The good news is that the platform is reported to be working on a way to convert lengthy texts into threads automatically.
When a tweet exceeds the 280-character limit, Twitter’s composer will automatically divide it into a thread, according to a tweet from app researcher Jane Manchun Wong.
Twitter wants to make composing threads less difficult, Wong stated in a reply to another user.
Currently, in order to add a tweet to a thread and post the subsequent 280 characters, users must click the Add button. This can be particularly unpleasant when you are trying out an idea or pasting information from another document.
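The splitting the composer would automate is conceptually simple: break the text into chunks of at most 280 characters at word boundaries. The sketch below is purely illustrative and not Twitter's actual implementation (which also has to handle URLs, emoji weighting, and thread numbering).

```python
def split_into_thread(text, limit=280):
    """Split text into tweet-sized chunks, breaking only at word boundaries."""
    chunks, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) <= limit:
            current = candidate  # word still fits in the current tweet
        else:
            if current:
                chunks.append(current)  # flush the full tweet
            current = word  # start the next tweet with this word
    if current:
        chunks.append(current)
    return chunks
```

A real composer would likely reserve a few characters per chunk for "1/5"-style markers, which is exactly the start/end labeling discussed below.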
Several users have recently complained about the difficulty of posting and reading conversations with more than a few tweets; the thread in question was 82 tweets long and focused on the defunct cryptocurrency exchange FTX. In response, Musk stated that the team is working to make thread writing simpler.
As Financial Times product manager Matt Taylor noted, it would be useful to have markers designating the start and end of each tweet in the thread, although the exact implementation details remain unknown. That would make it simpler for users to edit the text in a way that doesn’t disrupt the reading flow.
Musk has previously addressed the problem of posting lengthy tweets, stating that the social network is developing the capability to attach long-form content to tweets. Whether that will be a standalone feature separate from the new thread composer is unclear.
Currently, some users rely on third-party programs like Typefully, ThreadStart, and Chirr App, which offer capabilities like scheduling along with tools to automatically divide your post into threads without interfering with sentence flow.
Thanks to its acquisition of Threader the previous year, the company today provides Twitter Blue customers with a simple way to read threads. However, Musk hasn’t actually stated whether he is altering the reading experience for the typical user.
There is already a long-form writing program on Twitter called Notes, but it is exclusively available to a small number of writers, and under Musk’s leadership, its future is unclear.
Even though Twitter programmers are already working on it, it is unclear when the new composer tool for threads will launch. Since taking over the business, Musk has let go of more than half the employees.
Numerous executives have left, and the new leader even gave the remaining employees an ultimatum yesterday: either be “hardcore” or quit. In that situation, there is no assurance that products will ship on time.
The new Twitter Blue plan with a verification mark was hurriedly launched by the firm, only for the scheme to be discontinued a few days later. Musk stated earlier this week that the launch date had been moved to later in the month.
Wong also recently found code suggesting that Twitter is working on end-to-end encryption for direct messages.