-
Capabilities of general-purpose AI
- Generative capability (text, images, audio, video, etc.)
- Reasoning capability and other enhancements
-
Hallucinations are still a problem
- When it doesn't know something, it makes things up ("creates")
-
Continuous improvement
- Better results with GPT-4 than GPT-3.5
- More than 200 plugins as of June 2023
-
Code interpreter (illustrative sketch below)
- Doing math
- Data analysis
- Visualization
- Interactive graphing
- Image editing
- Image analysis
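A minimal sketch of the kind of script such a code-interpreter tool might write and run for a "summarize and chart this data" request; the file sales.csv and its columns are hypothetical, and this is not OpenAI's actual implementation.

```python
# Sketch of a code-interpreter-style session: load data, do some math,
# produce a chart. File name and columns are made up for illustration.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("sales.csv")                       # hypothetical upload

# Math / data analysis: summary statistics per category.
summary = df.groupby("category")["revenue"].agg(["count", "mean", "sum"])
print(summary)

# Visualization: revenue over time, saved as an image for the chat.
df.groupby("month")["revenue"].sum().plot(kind="line", title="Revenue by month")
plt.savefig("revenue.png")
```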
-
MemoryGPT (conceptual sketch below)
- Expands input capacity
- Expands long-term memory
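A conceptual sketch of the general long-term-memory idea (an assumption about the approach, not MemoryGPT's actual internals): store past exchanges outside the prompt and retrieve only the most relevant ones back into the context window. The keyword-overlap recall below is a stand-in for the embedding-based retrieval real systems use.

```python
# Minimal long-term-memory sketch: persist snippets, retrieve the most
# relevant ones for the current query instead of stuffing everything
# into the prompt.
from collections import Counter

memory: list[str] = []  # persisted conversation snippets

def remember(snippet: str) -> None:
    memory.append(snippet)

def recall(query: str, k: int = 3) -> list[str]:
    """Rank stored snippets by naive word overlap with the query.
    Real systems use vector embeddings instead of word counts."""
    q = Counter(query.lower().split())
    scored = [(sum((Counter(s.lower().split()) & q).values()), s) for s in memory]
    return [s for score, s in sorted(scored, reverse=True)[:k] if score > 0]

remember("User's dog is named Biscuit")
remember("User prefers concise answers")
print(recall("what is my dog called"))  # -> ["User's dog is named Biscuit"]
```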
-
AutoGPT (toy agent loop sketched below)
- Automates steps needed to complete complex tasks
- AI does the rest
- Learning agents
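A toy sketch of the plan-act-observe loop that tools like AutoGPT popularized; run_agent, llm, and tools are illustrative stand-ins, not AutoGPT's real interfaces.

```python
# Toy agent loop: the model proposes the next action, a tool executes it,
# the observation is fed back, and the loop repeats until the model
# declares the goal met.
def run_agent(goal: str, llm, tools: dict, max_steps: int = 10) -> str:
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        # Ask the model for the next action given everything so far.
        decision = llm("\n".join(history) + "\nNext action (tool:input) or FINISH:")
        if decision.startswith("FINISH"):
            return decision
        tool_name, _, tool_input = decision.partition(":")
        observation = tools[tool_name.strip()](tool_input.strip())
        history.append(f"ACTION: {decision}\nOBSERVATION: {observation}")
    return "Stopped after max_steps without finishing."

# Stub components just to show the shape of a run.
stub_tools = {"search": lambda q: f"(pretend search results for '{q}')"}
stub_llm = lambda prompt: "FINISH: drafted a 3-step plan"
print(run_agent("Plan a small launch event", stub_llm, stub_tools))
```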
-
Hype
-
Understanding the situation is critical
- It's about how general-purpose AI gets integrated into society
- We must understand both its capabilities and its dangers
-
Help or hurt
- Help companies
- Warnings about dangers drive public pressure for protective measures
- Regulation and policy are reactive, not proactive
-
Why fear/danger lens?
- Great risk is at play
- Our technology has exceeded our capacity to manage it
- AI has turned on the afterburners
- We need swift action
-
How LLMs work
-
Procedural algorithms vs. neural networks (toy contrast sketched below)
-
Procedural
- Transparent
- Decision tree
-
Neural Network
- Black-Box
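A toy contrast on a made-up loan-decision task: the procedural rules are readable branch by branch, while the network's behavior lives in numeric weights (random and untrained here, purely to show the opacity).

```python
# Procedural vs. neural: same inputs, very different transparency.
import numpy as np

def loan_decision_tree(income: float, debt: float) -> str:
    # Procedural / transparent: every branch can be inspected and audited.
    if income > 50_000:
        return "approve" if debt < 20_000 else "review"
    return "deny"

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(2, 8)), rng.normal(size=(8, 1))  # "black box" weights

def loan_neural_net(income: float, debt: float) -> str:
    # Neural network: the inputs flow through learned (here: random) weights;
    # the reason for any particular output is not directly human-readable.
    h = np.maximum(0, np.array([income / 1e5, debt / 1e5]) @ W1)  # ReLU layer
    return "approve" if (h @ W2).item() > 0 else "deny"

print(loan_decision_tree(60_000, 5_000), loan_neural_net(60_000, 5_000))
```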
-
How LLMs build understanding (minimal sketch of the elements below)
-
Elements
- Layers
- Embeddings
- Clustering
- Position
- Attention
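A minimal sketch of the listed elements with toy sizes and random weights; real LLMs learn these values and stack many such layers.

```python
# Toy version of the ingredients: embeddings, position, attention, layers.
import numpy as np

rng = np.random.default_rng(0)
vocab = {"the": 0, "cat": 1, "sat": 2}
d = 8                                    # embedding dimension

E = rng.normal(size=(len(vocab), d))     # embeddings: one vector per token
tokens = ["the", "cat", "sat"]
x = E[[vocab[t] for t in tokens]]        # (3, d) token vectors

pos = np.arange(3)[:, None] / 10.0       # position: inject word-order info
x = x + pos

# Attention: each token scores every other token and mixes their vectors.
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv
scores = Q @ K.T / np.sqrt(d)
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
out = weights @ V                        # (3, d): context-aware representations

# Stacking many such layers is what lets related words cluster together
# (similar vectors end up near each other in embedding space).
print(out.shape)
```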
-
Reinforcement Learning from Human Feedback (RLHF) matters (simplified sketch below)
- More nuanced responses
- Better alignment with human values
- Adaptation to new scenarios
- Error correction
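A heavily simplified sketch of the RLHF idea, not any lab's actual pipeline: a toy reward model learns from human preference pairs, and that learned score is what would steer further fine-tuning.

```python
# RLHF in miniature: 1) humans compare pairs of answers, 2) a reward model
# learns to score answers the way humans ranked them, 3) that reward signal
# would then drive further tuning of the LLM (omitted here).
import numpy as np

def features(answer: str) -> np.ndarray:
    # Toy stand-in for an answer's representation.
    return np.array([len(answer), answer.count("sorry"), answer.count("!")], float)

# Human feedback: (preferred answer, rejected answer) pairs.
comparisons = [("Here is a careful answer.", "idk!!!"),
               ("Sources: A, B. Short summary.", "whatever!")]

w = np.zeros(3)                           # reward model weights
for _ in range(200):                      # Bradley-Terry style logistic updates
    for good, bad in comparisons:
        diff = features(good) - features(bad)
        p = 1 / (1 + np.exp(-w @ diff))   # P(human prefers "good")
        w += 0.01 * (1 - p) * diff        # push the scores apart

reward = lambda a: w @ features(a)
# The learned reward now ranks a new pair the way the humans would.
print(reward("A measured, sourced reply.") > reward("lol no!!!"))
```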
-
Where things are going wrong
-
Current and new short-term harms
- Hallucinations
- Deepfakes
- Simulations
- Social hacking of AI
-
Net good vs. net bad: the order matters
-
Incredible potential, but also great risk
- Cyber-offense
- Deception
- Persuasion & manipulation
- Political strategy
- Weapons acquisition
- Long-horizon planning
- AI development
- Situational awareness
- Self-proliferation
- Order matters: we can't break society along the way
- Our cultures are built on language, and we've given out the keys to manipulation
- Open source: why it's different this time
- Dangers of unregulated, decentralized AI
- AI demands novel solutions that bind power to responsibility
-
Ways things can go horribly wrong in the future
-
Multi-polar traps
- A situation in which everyone engages in harmful behavior not because they want to, but because they will lose otherwise (see the toy payoff sketch below)
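A toy payoff matrix with illustrative numbers: both labs prefer mutual restraint, yet racing is each lab's best response to anything the other does, so both end up racing.

```python
# Multi-polar trap in miniature (made-up payoffs; higher is better).
payoffs = {                       # (lab A payoff, lab B payoff)
    ("pause", "pause"): (3, 3),
    ("pause", "race"):  (0, 4),
    ("race",  "pause"): (4, 0),
    ("race",  "race"):  (1, 1),
}

def lab_a_best_response(b_choice: str) -> str:
    # Lab A picks whichever move maximizes its own payoff, given B's move.
    return max(("pause", "race"), key=lambda a: payoffs[(a, b_choice)][0])

for b_choice in ("pause", "race"):
    print(f"If lab B chooses {b_choice}, lab A's best move is", lab_a_best_response(b_choice))
# Both lines print "race": individually rational, yet (race, race) -> (1, 1)
# is worse for everyone than (pause, pause) -> (3, 3). That is the trap.
```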
-
Escaping multi-polar traps
- Collaboration and communication
- Shared norms and values
- Incentives for cooperation
- Monitoring and enforcement
- Transparent decision-making processes
- Adaptive governance
- Long-Term perspective
- External intervention
-
AI & the economy
-
Incredible acceleration of the system
-
What ideas are being accelerated?
-
Prevailing socioeconomic systems
- Growth is good
- One can own land
- Nature is a stock of resources to be converted to human purposes
- People are perfectly rational economic actors
-
The broken paradigm of today's extractive tech
- Give users what they want
- Technology is neutral
- We've always had moral panics
- Maximize personalization
- Who are we to choose?
- Grow at all costs
- Obsess over metrics
- Capture attention
- Existential Risk = (Competition X Extraction)^Technology
-
Addressing misaligned financial incentives
- Price is always at the center
-
Thought leadership (set unified agenda; create and disseminate strategic language)
-
External pressure (drive a cultural awakening)
- Media
- Documentaries
- Books
- TV appearances
- Podcasts
- Conferences
- Policy / Law
- Humane tech policy principles
- Advise global leaders & policymakers
- Litigation & shareholder actions
- Education
- Families & Schools
- Toolkits & resources
-
Aspirational pressure (drive a shift toward humane technology)
- Product & Culture change
- Support and advice for tech companies
- Course (training on building humane technology)
- Workshops (general or topical)
- Mobilization
- Connect tech experts with social impact & policy
- Build an aligned community
- Community solutions library
- Toolkits & resources
-
Aligning our institutions with our tech
-
Democracies and collaborative problem-solving rely on two key faculties
- Sensemaking: how we make sense of the world and reality
- Choicemaking: how we make wise choices
-
There is a "wisdom gap" created by runaway technology between:
-
Complexity of the issues
- Misinformation
- Cyber-attacks
- Nuclear escalation
- GPT-4 & synthetic media
- Global financial risk
- Extremism
- AI arms race
- Planetary boundaries
- Synthetic biology
- ...
- ...and our ability to make sense of that complexity
-
Alternatives to GDP
-
Genuine Progress Indicator: folds in big externalities (toy arithmetic after this list) like:
- crime
- ozone depletion
- lost leisure time
- ...
- Bhutan's Gross National Happiness
- Thriving Places Index
- Happy Planet Index
- Human Development Index
- Green Domestic Product
- Better Life Index
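Illustrative arithmetic with hypothetical numbers: GDP adds up all spending, while a GPI-style measure subtracts externalities such as crime cleanup and adds non-market value such as leisure.

```python
# Toy GDP vs. GPI-style accounting (all figures invented for illustration).
spending = {"consumption": 800, "crime_cleanup": 50, "pollution_remediation": 30}

gdp = sum(spending.values())                      # 880: everything counts up

gpi = (spending["consumption"]
       - spending["crime_cleanup"]                # defensive spending is a cost
       - spending["pollution_remediation"]
       - 40                                       # estimated ecological damage
       + 60)                                      # value of leisure time
print(gdp, gpi)                                   # 880 vs 740
```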
-
Key takeaways
- Existential Risk = (Competition X Extraction)^Technology
- A price-centered system needs interventions that connect tightly to price
- AI demands that we upgrade democratic functioning and institutions to keep up with our innovations
-
Deepening disparity
- Leaving behind people with fewer resources and people who are not the common case
-
Massive job losses likely (more for higher education levels) + much more wealth inequality
- Short-term: huge productivity gains for people using AI
- Soon after that: lots of layoffs
- More impact on cognitively intensive jobs (correlated with higher education levels)
- Big psychological/status hit for that group
- Less immediate impact on physically intensive jobs, but robots are increasingly capable
- Additional labor competition = lower wages for those who are left
- Cost of producing many goods & services likely to drop greatly
-
"AI can partly help you with your job" will translate to lost jobs:
- "There's a huge cost premium on work that has to be split across two people: there's the communication overhead, there's the miscommunication, there's everything else, and if you can make one person twice as productive, you don't do as much as two people could do; maybe you do as much as three and a half or four people could do." - Sam Altman, OpenAI CEO
- Exposure: 71% for Bachelor's degree holders, 63% for Master's or higher
-
AI reinforces past patterns and can over-tailor individual risk assessments (toy illustration after the list below)
-
Based on data
- Health coverage?
- Home loan?
- Disability insurance?
- ...
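A toy illustration on synthetic data: a model fit to past decisions learns the historical approval pattern, bias included, and replays it for otherwise-identical new applicants.

```python
# Synthetic "history" of past loan decisions: group A was approved far more
# often than group B. A model fit to this history reproduces that pattern.
history = [  # (group, approved) pairs
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def fit(records):
    rates = {}
    for group, approved in records:
        rates.setdefault(group, []).append(approved)
    return {g: sum(v) / len(v) for g, v in rates.items()}

model = fit(history)           # learned approval rates per group
# Two identical applicants who differ only in group get different odds:
print(model["A"], model["B"])  # 0.75 vs 0.25 -- the past pattern, reinforced
```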
-
AI strengthens societal "defaults" and stereotypes
-
AI-Amplified Societal Conditioning Happens With:
- Gender
- Age
- Sexual Orientation
- Marital Status
- Race
- Parenting Roles
- Religion
- Criminal Record
- Economic Status
- Disability
- Nationality
- Mental Health Stigma
-
Paths forward
-
A catastrophic mix of conditions
-
Frenetic Innovation
- Roughly 10x/year progress is happening
- Intense competition, societal protections lagging
- Societal integration without understanding risk
-
Synthetic Media
- Easy creation of stunning synthetic media
- Very hard to tell real vs. synthetic
- Social media promotes engaging synthetic content
-
Distributed Access
- Global access complicates regulation
- Commodity hardware runs new models
- Countless options for malicious actors
-
Rising inequality
- Massive job losses likely
- Capitalism prioritizes those with capital
- Risk of disenfranchisement and civil unrest
-
Transcendent innovation requires transcendent clarity
- Why are we really doing all this innovation in the first place?
- Is our definition of "AI Alignment" sufficient?
- How well do you understand the conditions shaping you and your product?
- Thriving and Centering values modules at https://humantech.com/course
-
Centering what's important: 10 core capabilities
-
Nussbaum and Sen proposed a list of 10 core capabilities of which societies should seek to foster at least a minimal threshold, including:
- life, health and bodily integrity
- thinking, feeling and emotion
- affiliation
- play
- control over one's environment
-
AI & Kids
-
Relationships are the entry point
- Social media, phones, media, games
- Companies will supercharge their attempts to build these relationships with much more persuasive AI
- AI will magnify most of the known harms across all these domains and create huge $ opportunities
-
Opportunities
-
Mental health support
- More accessible, but...
- Huge risk of AI chatbots on social media driven by perverse incentives
-
Surgeon General Vivek Murthy
- Is social media safe?
- Evidence that social media harms young people's mental health
-
Education
- More interactive education
- Broader access
-
Resilience
-
Dangerous for kids accustomed to getting only what they want
- AIs look great
- Sound great
- Support them
- ...
- Instant gratification at an all-time high means resilience is at an all-time low
- ...
-
Recapping solutions
-
Within the existing socioeconomic system
- Interventions have to affect price
- Cultural awakening, legislation, litigation, insider pressure, inspiration, design support
- Global coordination needed
- Safety investments commensurate with growth
-
Getting AI aligned with society inside the governing economic system
- Including heavy input from social sciences + wisdom traditions
- Always centering human rights / growing disparities
- Centering values in an explicit way
- Transitioning to new systems
-
How you can help
-
Learn more about AI
- Watch this talk again
- Take our course: Foundations of Humane Technology
- Check out this ML safety course: https://course.mlsafety.org/
- Check out All Tech Is Human's reading list
-
Speak up
- Demonstrate nuanced thinking and balance
- Weigh in on forums and discussions
- Speak up where you have agency