The video explores the fascinating evolution of artificial intelligence and its impact on human society. The speaker sheds light on how computers and AI have evolved from a mere tool into an integral part of technological revolutions, from the industrial era to the digital age. The focus is on the exponential complexity AI brings, which can be beyond comprehension, making it difficult to control and thus challenging to trust.

Please remember to turn on the CC button to view the subtitles.

Key Vocabulary and Common Phrases:

1. anthropocentric [ˌænθrəpəˈsɛntrɪk] - (adjective) - Considering human beings as the central or most significant entity of the universe. - Synonyms: (human-centered, human-focused, human-centric)

One of the difficulties with talking to you about AI is it's defined so many different ways, and unhelpfully many of those are anthropocentric.

2. teleonomic [ˌtɛliəˈnɒmɪk] - (adjective) - Related to or denoting the adaptive nature of biological processes or the purpose-driven aspects of evolution. - Synonyms: (adaptive, goal-directed, purposeful)

Instead, let's start with intelligence and take what's called a teleonomic definition, which removes the idea of consciousness and instead says, intelligence is anything that seeks a goal utilizing information interchange.

3. sequester [sɪˈkwɛstər] - (verb) - To isolate or hide away, often used in the context of environmental science to indicate capturing or storing carbon dioxide. - Synonyms: (isolate, seclude, store)

So a great goal for AI well within its remit over the next decade or two would be something like figure out a novel way to sequester carbon and reverse global warming.

4. provenance [ˈprɒvənəns] - (noun) - The history of ownership of a valued object or work of art or the origin of something. - Synonyms: (origin, source, background)

We've already got proof of origin and proof of originality and proof of ownership and proof of provenance.

5. extensible [ɪkˈstɛnsəbəl] - (adjective) - Capable of being extended or expanded, especially in the context of systems or software that can add new functionality. - Synonyms: (expandable, scalable, extendable)

Tags are extensible, which is a fancy AI computer science way of saying extendable, which means you can put extra attributes in the tags.

6. cryptographic [ˌkrɪptəˈgræfɪk] - (adjective) - Related to the techniques of encryption and decryption for secure communication. - Synonyms: (encrypted, secure, encoded)

And these attributes will borrow from proven cryptographic protocols like digital identity and digital certification and checksums and verification.

7. tampered [ˈtæmpərd] - (verb) - To interfere with something in order to cause damage or make unauthorized alterations. - Synonyms: (meddle, alter, interfere)

That's how we know that the content and the tag have not been tampered with.

8. align [əˈlaɪn] - (verb) - To arrange things so that they form a line or are in proper position, or to bring into agreement. - Synonyms: (coordinate, bring together, harmonize)

First of all, how do we trust the goal? This is called the alignment challenge, trusting that the goal of AI is aligned with our goals.

9. nexus [ˈnɛksəs] - (noun) - A connection or series of connections linking two or more things. - Synonyms: (link, bond, connection)

They were the nexus of how computers and humans could read the same information.

10. ubiquitous [juːˈbɪkwɪtəs] - (adjective) - Present, appearing, or found everywhere, especially something so common as to be seemingly omnipresent. - Synonyms: (widespread, omnipresent, universal)

So tags are ubiquitous. They've been around since the 70s because they were the nexus of how computers and humans could read the same information.

The evolution of AI—how we can solve for trust - Matt Kuperholz - TEDxSydney

Hi, I'm Matt. I reckon it's a giveaway when someone spells their name with the at sign that they're probably a pretty serious computer nerd. And I have been in a deep and meaningful relationship with computers since the late 70s, and have been using AI almost every day, in anger and with love, since the late 90s. And I'm here to talk to you about the evolution of AI, some of the challenges we're facing to optimize this amazing technology for humanity, and to put forward a solution for how we can be doing more to face into those challenges. Sneak peek: the solution is called More.

One of the difficulties with talking to you about AI is that it's defined in so many different ways, and unhelpfully, many of those are anthropocentric. It's about computers doing things like humans. Instead, let's start with intelligence and take what's called a teleonomic definition, which removes the idea of consciousness and instead says: intelligence is anything that seeks a goal utilizing information interchange. So AI is simply something that seeks a goal with information interchange, artificially. And if we think back to the earliest intelligences 4 billion years ago, our last universal common ancestor, which evolved into archaea and bacteria and then eukaryotes, into the kingdoms of plants and fungi and animals, all of these things are intelligent by that definition. And right up the top of that evolutionary tree, you'll find us, the apes, pursuing complex goals and exchanging lots of information.

But why is it that, compared to chimpanzees with 99% common DNA, we're not just a little bit more intelligent, we're exponentially more intelligent? About 100,000 years ago, we took a risky, costly, and ultimately transformative bet on our brains and our greater cognitive reasoning and abstract thinking, and our ability to invent and use tools and systems of tools, which are technologies: starting with things like fire and language and the first virtual reality, religion; moving further away and quickly with agriculture, writing, and the second virtual reality, money; and then even further with the technologies of the mathematical and scientific revolutions, with the industrial revolution and with the digital revolution, where for the first time assets were no longer scarce and consumed, but plentiful and reusable, in terms of data and code.

And at the pinnacle of these inventions, artificial intelligence, we have now created technologies that can themselves seek goals and exchange information. And we feel it in our exponential-age bones that things have never felt like they're moving this quickly, but they're also never going to be this slow again. And AI is unique as a technology in that we no longer understand exactly how it does what it does. And if you've created a tool that's so complex that you don't understand it, then it's difficult to control. And if it's difficult to control, then it's difficult to trust.

And our challenges with trust, which affect our ability to optimize the value we get from this, relate back to those two aspects of intelligence. First of all, how do we trust the goal? This is called the alignment challenge: trusting that the goal of AI is aligned with our goals. And Kranzberg said in 1985 (this is really important): technology is neither good nor bad, nor is it neutral.

So a great goal for AI well within its remit over the next decade or two would be something like figure out a novel way to sequester carbon and reverse global warming, or cure all disease, or transmute microplastics in the environment into food, or fight against novel cyber attacks. But on the other hand, the bad goals could include invent new cyber attacks or other horrible weapons of mass destruction, or promote and spread misinformation, or create irrevocable social disharmony.

The other problem is with the information, because machine-generated information has been doubling every year compared to human-generated information, which means it will soon completely dwarf it, and it's getting indistinguishable. How do you tell the real photo from the doctored one, the real movie from the deepfake, the anguished phone call from your child from the scam, the research you're doing from the truth, or from something made up by a machine in some sort of hallucination?

And it's this challenge of understanding the quality of the information, which is bad not just for humans trying to trust it, but also for machines that learn from this information in order to be useful tools for us. And if they keep digesting dubious machine-generated information, then they're going to have a mad cow moment. We already know that the models degrade. So we have to move from an environment where we trust by default to one where we distrust by default.

So here's my proposed solution, a simple idea worth spreading to bring trust back. It's called More. More stands for machine or human. It's a proposal to use a not-for-profit, open-source standard whereby we take tags, which are those angle brackets, to indelibly watermark all content as machine or human. No tag, no trust. I'm glad you like it.

So tags are ubiquitous. They've been around since the 70s because they were the nexus of how computers and humans could read the same information. Tags underpin how everything is presented on the World Wide Web. And tags can be read and automatically updated and filtered by our phones, our browsers, our devices and our AIs.

Tags are extensible, which is a fancy computer science way of saying extendable: you can put extra attributes in the tags. And these attributes will borrow from proven cryptographic protocols like digital identity and digital certification and checksums and verification. So not only is it tagged as human, it's proved to come from a human.
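The idea of a tamper-evident tag can be sketched with a checksum. This is a minimal illustration, not any published standard: the `mor` tag name and its `origin` and `sha256` attributes are hypothetical, and SHA-256 is assumed as the checksum.

```python
import hashlib

def make_tag(content: str, origin: str) -> str:
    """Wrap content in a hypothetical <mor> tag recording its
    origin ("human" or "machine") and a SHA-256 checksum."""
    digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
    return f'<mor origin="{origin}" sha256="{digest}">{content}</mor>'

def verify_tag(tagged: str) -> bool:
    """Recompute the checksum and compare it to the one in the tag.
    A mismatch means the content no longer matches the tag."""
    header, _, rest = tagged.partition(">")
    content = rest.rsplit("</mor>", 1)[0]
    claimed = header.split('sha256="')[1].split('"')[0]
    actual = hashlib.sha256(content.encode("utf-8")).hexdigest()
    return claimed == actual

tagged = make_tag("Hi, I'm Matt.", origin="human")
print(verify_tag(tagged))                           # True: untouched
print(verify_tag(tagged.replace("Matt", "a bot")))  # False: altered
```

Note that a bare checksum only detects alteration; anyone could recompute it after editing the content. Proving who authored the tag requires a digital signature over both content and attributes, which is where the digital identity and certification protocols the speaker mentions come in.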

We've solved for proof of humanity. That's why an AI can't open a bank account or get a passport. We've solved for authenticity. That's the padlock in your browser. That's how we know that the content and the tag have not been tampered with. We've already got proof of origin and proof of originality and proof of ownership and proof of provenance.

And we can embed in these tags certification, assurance of responsible AI standards, that helps us trust that the goals of the AI are aligned with our expectations. We've had nutrition labels on our food for a long time, which help us decide what to eat. More is like a nutrition label for our information diet. It's like a pronoun, but for the benefit of the whole species and our technology. And I'm not just throwing pronouns out there to be glib. Remember that pronouns were not some top-down idea and movement, but rather something that someone thought was a great idea worth spreading, and it came bottom-up.

And now here we are. George Bernard Shaw said hell is to drift and heaven is to steer. AI is not good, AI is not bad, AI is not neutral. Nor does AI care if we get this right. Nature doesn't care either. We care. We should be doing more together, as soon as possible, to steer towards a bright AI future.

ARTIFICIAL INTELLIGENCE, SCIENCE, TECHNOLOGY, MACHINE LEARNING, DATA TRUST, AI ALIGNMENT, TEDX TALKS