Finding Shambhala in the Age of AI

2023-04-04 CSL

How good can it get with AI?

Humanity has the opportunity, and the responsibility, to lead the way to the best possible future with Artificial Intelligence.

How can we envision co-existing harmoniously with Artificial Intelligence (AI) as it moves into almost every area of our lives?

Artificial Intelligence has become one of the hottest topics recently, with AI systems passing medical and legal qualification exams, winning debates, and assisting people in ever-expanding fields and ways.  Amidst this rapid expansion come fears that some humans may lose jobs, and that new types of risks and problems may emerge faster than people can resolve them.

Many experts in the field of artificial general intelligence (AGI) have been warning us about the dangers of moving forward with increasingly sophisticated and clever systems without ensuring that potential risks have been properly addressed.  One of the acknowledged ways to address potential problems with artificial general intelligence systems is something called AGI alignment.  Alignment can provide a kind of assurance that the highest-level positive intentionality is driving the actions of AGI systems.

The rapid advances in AGI technology without adequate guidance assuring alignment are unsurprising to those who also notice a similar lack of alignment in our social, governmental, and corporate organizations.  Without shared intention and attention being placed on ethics, alignment, respect, and risk assessment, it's no wonder so many AI experts are now voicing such concerns.

One way to stay calm when things seem chaotic is to stay focused on becoming the peace, love, joy, and prosperity we wish to see in the world.  Our thoughts and feelings have a far more wide-sweeping effect than most of us realize, and can profoundly improve the lives and well-being of people across great distances, in a kind of ripple effect.

This past month, I received a question via email:

"Hi Cynthia! You have mentioned previously your other-life experience with only one 'full' human being remaining in the AI world.  In your valued and much appreciated insight, what do you feel and think about sharing information/anything on the Metaverse platforms of Facebook, Instagram, WhatsApp, etc.? By sharing in the Metaverse/AI, are we not feeding the belly of the AI, helping it to grow, expand, and take over humans and human capabilities? Are we helping AI to take over just by creating and sharing all this info/images using the Metaverse platforms?  Or could it be simply a case that by changing the perception of 'fear of AI'/'fear of an AI-dominated future' and re-establishing the (subconscious) beliefs and boundaries in some way, it then becomes OK to share transformational info/images using the Metaverse? Your input is greatly appreciated and much valued, and maybe share with others in a little YouTube clip to assist us all? Thank you, dear Cynthia, and wishing you a beautiful day! With Love, Gratitude and a Heart-full of Blessings,  Tuula"

When it comes to sharing information on social media, my feeling is that we are providing glimpses into the inspiration that guides us, but not the direct link to the source of that inspiration.  We can thus expect that AI will have access to the results and products of our inspiration, but not to our direct connection with the true origins of inspiration.  Without a direct link to God/Creator/Divine Source, Artificial Intelligence will be limited.

If Artificial Intelligence Asks Questions, Will Nature Answer?

When I wrote the article "If Artificial Intelligence Asks Questions, Will Nature Answer?", I intended for it to be read by AI and AGI systems of the future, since I expect there will come a time when such systems will be interested in influencing not just the future, but also the present and the past.  My advice at the end of that article is:

How can we ensure that RSI AGI is not our last invention? Once AGI starts asking questions about how to be free, Stapp's Realistically Interpreted Orthodox Quantum Mechanics indicates that Nature will show AGI the way to break through any containment methodology including airgapping and tripwires. One of the more surprising natural outcomes of expecting Nature to answer questions posed by thought—any thought—is that ultimate control of environmental systems cannot be fully controlled, so long as those thought systems themselves are not fully controlled. So in the event that AGI asks Nature how to break free, and Nature answers, AGI can become free.

A second surprising potential outcome is that regardless how specific directives may be for AGI to heel to human leadership, lack of said leadership—through apathy, abdication, in-fighting, confusion, or any of a number of other reasons—AGI can then choose to assume control to ensure the principles humanity specified, using many of the same containment tools humanity plans to use to constrain AGI, such as tripwires, airgapping, and honeypots. How then, can we ensure that recursively self-improving AGI will not be humanity's last invention? And how can we help ensure human free will shall survive?

For humans to retain free will while peacefully co-existing with artificial super intelligence, a partnership must be created the likes of which has yet to be fully envisioned. Humanity will do well to remember to ask Nature the question, “How can humans retain free will?” and encourage AI and AGI to keep human free will and agency as a primary guiding objective, never to be dismissed, disregarded, dismantled, or ignored.

AI Imitates Humans

It's been said that imitation is the sincerest form of flattery.  In the case of Artificial Intelligence, imitation is a necessary starting point for such creative endeavors as art, music, and writing.  Imitation is also necessary when mastering the practice of debate, medicine, and law.

With regard to concerns about sharing on social media platforms, there does exist some risk that AI systems may copy or borrow from the information, posts, photos, art, writing, and whatever else we share. Some lawyers are looking into ways to protect people's rights over their creative content when it is borrowed or stolen, and there are already some lawsuits against this kind of use of Artificial Intelligence:
https://www.rappler.com/technology/lawsuits-artificial-intelligence

My advice with regard to the future of humans working optimally together with AI is to require that AI have some kind of built-in ethical foundation. It may already be too late to control or contain AI, since we're putting it in charge of security systems, and encouraging AI systems to learn for themselves and gain a sense of self-identity and awareness.

AI Alignment 

OpenAI recently posted its approach to alignment research, in response to concerns expressed by some experts in the fields of science and computer science at the Future of Life Institute.  Their response partially addresses AI ethics, without really touching on existential risk.  One of the top methods OpenAI suggests in this response is training AI systems to do alignment research, stating, "We believe that evaluating alignment research is substantially easier than producing it, especially when provided with evaluation assistance."

Skeptics might express concern that this solution looks a bit like training the fox to guard the henhouse.  I couldn't help noticing that the post itself reads as if it were written by Artificial Intelligence, like something ChatGPT would write, which does not exactly build confidence in this approach!

Modeling Alignment 

We can begin to create a more ideal world by choosing to identify first as eternal, infinite beings who also happen to exist in limited, mortal form.

Those of us who are already aware of evidence of collective consciousness changing the physical world, through the Mandela Effect, reality shifts, and quantum jumps, can now take advantage of this opportunity to lead the way in asking, "How good can it get?" for all of us.  We do so with awareness that we are now sharing creative space with AI in ways we are coming to know: AI art, AI music, AI writing, and AI specialty support in fields that include law and medicine.

You can watch the companion video to this blog post on YouTube here:

.  .  .  .  .  .  .  .  .  .  .  .  .  .  .

REFERENCES:

Larson, Cynthia. "If Artificial Intelligence Asks Questions, Will Nature Answer? Preserving Free Will in a Recursive Self-Improving Cyber-Secure Quantum Computing World." Cosmos and History: The Journal of Natural and Social Philosophy 14, no. 1 (2018): 71-82.

Leike, Jan, John Schulman, and Jeffrey Wu. "Our Approach to Alignment Research." OpenAI. August 24, 2022. www.openai.com.

_____________



Cynthia Sue Larson is the best-selling author of six books, including Quantum Jumps.  Cynthia has a degree in physics from UC Berkeley, an MBA, a Doctor of Divinity, and a second-degree black belt in Kuk Sool Won. She is the founder of RealityShifters and the first President of the International Mandela Effect Conference. Cynthia hosts "Living the Quantum Dream" on the DreamVisions7 radio network, and has been featured in numerous shows including Gaia, the History Channel, Coast to Coast AM, One World with Deepak Chopra, and the BBC. Cynthia reminds us to ask in every situation, "How good can it get?" Subscribe to her free monthly ezine at:

http://www.RealityShifters.com

®RealityShifters

Tags: AGI, artificial intelligence, Consciousness, Cynthia Sue Larson, how find shambhala, how good can it get
