FFFUTURES shares notes relevant to minds building future scenarios and extended realities. If you’d like to sign up, you can do it here.
SYNTHETIC UNIVERSES
The capacity to generate synthetic universes did not happen overnight.
The first step toward the creation of artificial simulations was discovered by accident during the DALL-E experiments.
With the evolution of neural network models (and the expansion in researchers' creativity), scientists stumbled upon the models' unexpected capabilities.
With the advent of Zero-Shot Learning (ZSL), researchers found a way to make models recognize objects they had never seen during training. This approach led to a series of efforts that taught models to infer context, fill in the blanks, and develop a rudimentary form of reasoning.
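The zero-shot idea can be sketched in a few lines. The toy below assumes a CLIP-style setup where images and label names are encoded into a shared vector space; the embeddings here are hand-made stand-ins for a real encoder's output, and the function names are my own, not from any specific library.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm

def zero_shot_classify(image_embedding, label_embeddings):
    """Pick the label whose text embedding is closest to the image embedding.

    The model never trained on these labels; it only needs a shared
    embedding space in which unseen class names can be scored.
    """
    return max(label_embeddings,
               key=lambda label: cosine(image_embedding, label_embeddings[label]))

# Hand-made toy embeddings standing in for a real encoder's output.
labels = {
    "armchair": [0.9, 0.1, 0.0],
    "avocado":  [0.1, 0.8, 0.3],
    "corgi":    [0.0, 0.2, 0.9],
}
image = [0.85, 0.2, 0.05]  # pretend output of an image encoder

print(zero_shot_classify(image, labels))  # → armchair
```

The point is that adding a new class costs nothing but a new text embedding, which is what lets such models "recognize" things they were never explicitly trained on.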
The impact this technology would have on certain processes and professions was never seriously considered. Yes, there were many good intentions around planning for the long-term ethical implications of these neural networks, but it never went beyond that: good intentions.
The delegation of tasks to the machines went from mundane, repetition-based chores to iteration-focused exercises, and finally to handing over core human faculties like reasoning and creativity.
The input to these systems went from "an armchair in the shape of an avocado", to "A 3-part novel in the fantasy genre capable of winning the Nebula award."
When these models were connected to the latest technology in 3D printing, the neural networks moved to disrupt the tangible realm. Everyday products were generated from plain natural-language commands.
Pointing a Lidar-capable device at a space in the living room and saying "furnish this space with a modern living room ideal for a family of 3 and a corgi" was enough for the system to generate variations (all of them excellent options) for families to pick from.
Once selected, each object in the image was transformed into a production-ready 3D model that got 3D printed and delivered on the same day.
The evolution continued, eliminating jobs and turning more people into curators. There was no more need for creators; the networks would both explore solutions and build the products. People only needed to command and curate.
It didn't happen overnight, but the lines of reality got blurry.
"A VR headset comfortable to use non-stop for one month."
"A game that simulates the life of a rockstar in the '70s."
"A drug that awakens 15% of my mind’s dormant capabilities."
Mix all of that, and you quickly get to our current state of proliferation of artificial simulations.
For every person, a simulated reality.
For every mind, a God contained in a synthetic universe of their own making.
👁️ Omnirealities
State of the Metaverse 2021
Eric Elliot’s thought-provoking article on the components that make up the Metaverse, why it should be an open ecosystem, and the case for Non-Fungible Tokens as a decentralized way to enable true ownership of digital goods.
The metaverse, in a nutshell, is the digital world, where anything we can imagine can exist. Eventually, we’ll be connected to the metaverse all the time, extending our senses of sight, sound, and touch, blending digital items into the physical world, or popping into fully immersive 3D environments whenever we want. That family of technologies is known collectively as eXtended Reality (XR).
🔮 Future Scenarios
DeepMind’s AI makes gigantic leap in solving protein structures
In the previous issue, I shared notes about the role of protein designers and the future their industry is looking to enable.
This article continues that topic, covering the disruption and the possibilities brought by an AI developed by DeepMind, a Google offshoot.
An artificial intelligence (AI) network developed by Google AI offshoot DeepMind has made a gargantuan leap in solving one of biology’s grandest challenges — determining a protein’s 3D shape from its amino-acid sequence.
A summary of the core ideas:
Proteins are the building blocks of life.
How a protein works and what it does is determined by its 3D shape.
Finding the complete structure of proteins is one of the objectives of molecular biology.
DeepMind’s AlphaFold is the AI network revolutionizing the field of protein structure prediction.
“The model from group 427 gave us our structure in half an hour, after we had spent a decade trying everything”
💀 Not a Cylon
Marylou Faure, @maryloufaure
🧠 Common Enemy
The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks
Erick Galinkin, writing for the Montreal AI Ethics Institute, summarizes a paper on unintended memorization: the risk that generative sequence models inadvertently expose private information learned from rare, unique sequences in their training data.
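The paper quantifies this risk with an "exposure" metric: insert a known canary secret into the training data, then rank it by model perplexity against a space of candidate secrets. Below is a rough sketch of that metric with made-up perplexity values; the real method queries a trained sequence model for these numbers, and the function name here is my own.

```python
from math import log2

def exposure(canary_perplexity, candidate_perplexities):
    """Exposure = log2(|candidates|) - log2(rank of the canary).

    rank = the canary's position among all candidate sequences when
    sorted by model perplexity (lower perplexity = more memorized).
    High exposure means the model assigns the inserted canary an
    unusually low perplexity, i.e. it has memorized it.
    """
    rank = 1 + sum(1 for p in candidate_perplexities if p < canary_perplexity)
    return log2(len(candidate_perplexities)) - log2(rank)

# Made-up perplexities for 8 candidate secrets, canary included.
candidates = [12.1, 45.3, 33.0, 27.8, 51.2, 40.4, 29.9, 38.6]

print(exposure(12.1, candidates))  # canary ranks 1st of 8 → exposure 3.0
print(exposure(51.2, candidates))  # canary ranks last → exposure 0.0
```

A fully memorized canary maxes out the metric, while a canary the model treats like any other string scores near zero, which is what makes exposure usable as a test for leaked training data.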
Are you hacking with futures and other realities? Do you have comments, stories, or suggestions? I’d like to hear from you. Reach out: heyfffutures@gmail.com