Don’t confuse ‘giant AI’ for what AI can really do

By Ruchir
2 years ago
in News

Recently, ChatGPT and its ilk of ‘giant artificial intelligences’ (Bard, Chinchilla, PaLM, LaMDA, et al.), or gAIs, have been making headlines.

ChatGPT is a large language model (LLM) – a type of (transformer-based) neural network that is good at predicting the next word in a sequence of words. ChatGPT uses GPT-4, a model trained on a large amount of text from the internet that its maker, OpenAI, could scrape and could justify as being safe and clean to train on. GPT-4 reportedly has one trillion parameters, now being applied in the service of, per the OpenAI website, ensuring the creation of “artificial general intelligence that serves all of humanity”.
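The core mechanism – score candidate next words, convert scores to probabilities, pick one – can be sketched in a few lines. This is a toy illustration, not OpenAI’s implementation; the candidate words and scores are invented for the example.

```python
import math

def softmax(scores):
    """Turn raw model scores (logits) into a probability distribution."""
    m = max(scores.values())  # subtract the max for numerical stability
    exps = {word: math.exp(s - m) for word, s in scores.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

# Hypothetical scores a trained model might assign to candidate
# next words after the prefix "The cat sat on the".
logits = {"mat": 4.0, "roof": 2.5, "moon": 0.5}

probs = softmax(logits)
next_word = max(probs, key=probs.get)  # greedy decoding: pick the likeliest word
```

A real LLM does this over a vocabulary of tens of thousands of tokens, with the scores computed by billions of learned parameters, but the decoding step is the same shape.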

Yet gAIs leave no room for democratic input: they are designed from the top down, on the premise that the model will acquire the smaller details on its own. These systems have many intended use-cases, including legal services, teaching students, generating policy suggestions, and even providing scientific insights. gAIs are thus intended to be a tool that automates what has so far been assumed impossible to automate: knowledge-work.

What is ‘high modernism’?

In his 1998 book Seeing Like a State, Yale University professor James C. Scott delves into the dynamics of nation-state power, both democratic and non-democratic, and its consequences for society. States seek to improve the lives of their citizens, but when they design policies from the top down, they often reduce the richness and complexity of human experience to that which is quantifiable.

The current driving philosophy of states is, according to Prof. Scott, “high modernism” – a faith in order and measurable progress. He argues that this ideology, which falsely claims to have scientific foundations, often ignores local knowledge and lived experience, leading to disastrous consequences. He cites the example of monocrop plantations, in contrast to multi-crop plantations, to show how top-down planning can fail to account for regional diversity in agriculture.

The consequence of that failure is the destruction of soil and livelihoods in the long-term. This is the same risk now facing knowledge-work in the face of gAIs.

Why is high modernism a problem when designing AI? Wouldn’t it be great to have a one-stop shop, an Amazon for our intellectual needs? As it happens, Amazon offers a clear example of the problems resulting from a lack of diverse options. Such a business model yields only increased standardisation and not sustainability or craft, and consequently everyone has the same cheap, cookie-cutter products, while the local small-town shops die a slow death by a thousand clicks.

What do giant AIs abstract away?

Like the death of local stores, the rise of gAIs could lead to the loss of languages, which will hurt the diversity of our very thoughts. The risk of such language loss stems from the bias induced by models trained only on the languages that already populate the internet, which are mostly English (~60%). There are other ways in which a model is likely to be biased, including religion (for example, more websites preach Christianity than other religions), sex, and race.

At the same time, LLMs are unreasonably effective at providing intelligible responses. Science-fiction author Ted Chiang suggests that this is true because ChatGPT is a “blurry JPEG” of the internet, but a more apt analogy might be that of an atlas.

An atlas is a great way of seeing the whole world in snapshots. However, an atlas lacks multi-dimensionality. For example, I asked ChatGPT why it is a bad idea to plant eucalyptus trees in the West Medinipur district. It gave me several reasons why monoculture plantations are bad – but failed to supply the real reason people in the area opposed it: a monoculture plantation reduced the food they could gather.

That kind of local knowledge only comes from experience. We can call that ‘knowledge of the territory’. This knowledge is abstracted away by gAIs in favour of the atlas view of all that is present on the internet. The territory can only be captured by the people doing the tasks that gAIs are trying to replace.

How can diversity help?

A part of the failure to capture the territory shows up in gAIs’ lack of understanding. If you are careful about what you ask them (a feat called “prompt engineering” – an example of a technology warping the ecology of our behaviour), they can fashion impressive answers. But ask the same question in a slightly different way and you can get complete rubbish. This trend has prompted computer scientists to call these systems “stochastic parrots” – that is, systems that can mimic language but are random in their behaviour.
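The “stochastic” part is literal: these models sample each next word from a probability distribution, so the same prompt can yield different outputs from run to run. A minimal sketch of temperature sampling (the candidate answers, scores, and temperature are invented for illustration):

```python
import math
import random

def sample_word(logits, temperature, rng):
    """Sample one word; higher temperature flattens the distribution,
    making less likely words more probable."""
    scaled = {w: s / temperature for w, s in logits.items()}
    m = max(scaled.values())  # subtract the max for numerical stability
    weights = {w: math.exp(s - m) for w, s in scaled.items()}
    total = sum(weights.values())
    r = rng.random() * total
    acc = 0.0
    for word, weight in weights.items():
        acc += weight
        if r <= acc:
            return word
    return word  # guard against floating-point edge cases

# Three nearly tied candidate answers: small changes in the random
# draw flip the output entirely.
logits = {"yes": 2.0, "no": 1.8, "maybe": 1.7}
rng = random.Random(0)  # fixed seed so the run is reproducible
answers = [sample_word(logits, temperature=1.5, rng=rng) for _ in range(10)]
```

When the top candidates are close in score, repeated runs produce different answers – the parrot is rolling dice.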

Positive research directions exist as well. For example, BLOOM is an open-source LLM developed by scientists with public money and with extensive filtering of the training data. The model is also multilingual, covering 10 Indian languages, and the project has an active ethics team that regularly updates the licence for use.

There are multiple ways to thwart the risks posed by gAIs. One is to artificially slow the rate of progress in AI commercialisation to allow time for democratic inputs. (Tens of thousands of researchers have already signed a petition to this effect.)

Another is to ensure that diverse models are being developed. ‘Diversity’ here implies multiple solutions to the same question, like independent cartographers preparing different atlases with different incentives: some will focus on the flora, others on the fauna. The research on diversity suggests that the more time passes before a common solution is settled on, the better the outcome. And a better outcome is critical given the stakes of artificial general intelligence – an area of study in which a third of researchers believe AI could cause a nuclear-level catastrophe.
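One way to picture what diverse models buy us: query several independently built models and treat disagreement as a signal, rather than collapsing everything into a single answer that discards minority views. A minimal sketch, with hypothetical model responses invented for the example:

```python
from collections import Counter

def summarize(answers):
    """Report the consensus answer alongside every dissenting one,
    so minority answers (like local knowledge) are surfaced, not lost."""
    counts = Counter(answers)
    consensus = counts.most_common(1)[0][0]
    dissent = sorted(a for a in counts if a != consensus)
    return consensus, dissent

# Hypothetical responses from three independently developed models to
# "Why oppose eucalyptus plantations in West Medinipur?"
responses = [
    "monoculture degrades soil",
    "monoculture degrades soil",
    "plantations reduce food people can gather",
]
consensus, dissent = summarize(responses)
```

A single gAI would return only its one answer; with several independent ‘atlases’, the dissenting answer – here, the one closest to the territory – at least stays on the table.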

How could simply ‘assisting and augmenting’ be harmful?

Just to be clear, I wrote this article, not ChatGPT. But I wanted to check what it would say…

“Q: Write a response to the preceding text as ChatGPT.

A: As ChatGPT, I’m a tool meant to assist and augment human capabilities, not replace them; my goal is to understand and respond to your prompts, not to replace the richness and diversity of human knowledge and experience.”

Yet as the writer George Zarkadakis put it, “Every augmentation is also an amputation”. ChatGPT & co. may “assist and augment” but at the same time, they reduce the diversity of thoughts, solutions, and knowledge, and they currently do so without the inputs of the people meant to use them.

