
Category Archives: Artificial Super Intelligence

AI singularity may come in 2027 with artificial ‘super intelligence’ sooner than we think, says top scientist – Livescience.com

Posted: March 8, 2024 at 6:22 am

Humanity could create an artificial intelligence (AI) agent that is just as smart as humans within as little as three years, a leading scientist has claimed.

Ben Goertzel, a computer scientist and CEO of SingularityNET, made the claim during the closing remarks at the Beneficial AGI Summit 2024 on March 1 in Panama City, Panama. He is known as the "father of AGI" after helping to popularize the term artificial general intelligence (AGI) in the early 2000s.

The best AI systems in deployment today are considered "narrow AI" because they may be more capable than humans in one area, based on training data, but can't outperform humans more generally. These narrow AI systems, which range from machine learning algorithms to large language models (LLMs) like ChatGPT, struggle to reason like humans and understand context.

However, Goertzel noted that AI research is entering a period of exponential growth, and the evidence suggests that artificial general intelligence (AGI), where AI becomes just as capable as humans across several areas independent of the original training data, is within reach. This hypothetical point in AI development is known as the "singularity."

Goertzel suggested 2029 or 2030 could be the likeliest years when humanity will build the first AGI agent, but that it could happen as early as 2027.


If such an agent is designed to have access to and rewrite its own code, it could then very quickly evolve into an artificial super intelligence (ASI), which Goertzel loosely defined as an AI with the cognitive and computing power of all of human civilization combined.

"No one has created human-level artificial general intelligence yet; nobody has a solid knowledge of when we're going to get there. I mean, there are known unknowns and probably unknown unknowns. On the other hand, to me it seems quite plausible we could get to human-level AGI within, let's say, the next three to eight years," Goertzel said.

He pointed to "three lines of converging evidence" to support his thesis. The first is modeling by computer scientist Ray Kurzweil in the book "The Singularity is Near" (Viking USA, 2005), which has been refined in his forthcoming book "The Singularity is Nearer" (Bodley Head, June 2024). In his book, Kurzweil built predictive models that suggest AGI will be achievable in 2029, largely centering on the exponential nature of technological growth in other fields.

Goertzel also pointed to improvements made to LLMs within a few years, which have "woken up so much of the world to the potential of AI." He clarified LLMs in themselves will not lead to AGI because the way they show knowledge doesn't represent genuine understanding, but that LLMs may be one component in a broad set of interconnected architectures.

The third piece of evidence, Goertzel said, lay in his work building such an infrastructure, which he has called "OpenCog Hyperon," as well as associated software systems and a forthcoming AGI programming language, dubbed "MeTTa," to support it.

OpenCog Hyperon is a form of AI infrastructure that involves stitching together existing and new AI paradigms, including LLMs as one component. The hypothetical endpoint is a large-scale distributed network of AI systems based on different architectures that each help to represent different elements of human cognition from content generation to reasoning.

Such an approach is a model other AI researchers have backed, including Databricks CTO Matei Zaharia in a blog post he co-authored on Feb. 18 on the Berkeley Artificial Intelligence Research (BAIR) website.

Goertzel admitted, however, that he "could be wrong" and that we may need a "quantum computer with a million qubits or something."

"My own view is once you get to human-level AGI, within a few years you could get a radically superhuman AGI unless the AGI threatens to throttle its own development out of its own conservatism," Goertzel added. "I think once an AGI can introspect its own mind, then it can do engineering and science at a human or superhuman level. It should be able to make a smarter AGI, then an even smarter AGI, then an intelligence explosion. That may lead to an increase in the exponential rate beyond even what Ray [Kurzweil] thought."



AI in Education – EducationNext

Posted: August 8, 2023 at 10:56 am

In Neal Stephenson's 1995 science fiction novel, The Diamond Age, readers meet Nell, a young girl who comes into possession of a highly advanced book, The Young Lady's Illustrated Primer. The book is not the usual static collection of texts and images but a deeply immersive tool that can converse with the reader, answer questions, and personalize its content, all in service of educating and motivating a young girl to be a strong, independent individual.

Such a device, even after the introduction of the Internet and tablet computers, has remained in the realm of science fiction, until now. Artificial intelligence, or AI, took a giant leap forward with the introduction in November 2022 of ChatGPT, an AI technology capable of producing remarkably creative responses and sophisticated analysis through human-like dialogue. It has triggered a wave of innovation, some of which suggests we might be on the brink of an era of interactive, super-intelligent tools not unlike the book Stephenson dreamed up for Nell.

Sundar Pichai, Google's CEO, calls artificial intelligence "more profound than fire or electricity or anything we have done in the past." Reid Hoffman, the founder of LinkedIn and current partner at Greylock Partners, says, "The power to make positive change in the world is about to get the biggest boost it's ever had." And Bill Gates has said that this new wave of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone.

Over the last year, developers have released a dizzying array of AI tools that can generate text, images, music, and video with no need for complicated coding but simply in response to instructions given in natural language. These technologies are rapidly improving, and developers are introducing capabilities that would have been considered science fiction just a few years ago. AI is also raising pressing ethical questions around bias, appropriate use, and plagiarism.

In the realm of education, this technology will influence how students learn, how teachers work, and ultimately how we structure our education system. Some educators and leaders look forward to these changes with great enthusiasm. Sal Khan, founder of Khan Academy, went so far as to say in a TED talk that AI has the potential to effect "probably the biggest positive transformation that education has ever seen." But others warn that AI will enable the spread of misinformation, facilitate cheating in school and college, kill whatever vestiges of individual privacy remain, and cause massive job loss. The challenge is to harness the positive potential while avoiding or mitigating the harm.

What Is Generative AI?

Artificial intelligence is a branch of computer science that focuses on creating software capable of mimicking behaviors and processes we would consider intelligent if exhibited by humans, including reasoning, learning, problem-solving, and exercising creativity. AI systems can be applied to an extensive range of tasks, including language translation, image recognition, navigating autonomous vehicles, detecting and treating cancer, and, in the case of generative AI, producing content and knowledge rather than simply searching for and retrieving it.

Foundation models in generative AI are systems trained on a large dataset to learn a broad base of knowledge that can then be adapted to a range of different, more specific purposes. This learning method is self-supervised, meaning the model learns by finding patterns and relationships in the data it is trained on.

Large Language Models (LLMs) are foundation models that have been trained on a vast amount of text data. For example, the training data for OpenAI's GPT model consisted of web content, books, Wikipedia articles, news articles, social media posts, code snippets, and more. OpenAI's GPT-3 models underwent training on a staggering 300 billion tokens, or word pieces, using more than 175 billion parameters to shape the model's behavior, nearly 100 times more data than the company's GPT-2 model had.

By doing this analysis across billions of sentences, LLMs develop a statistical understanding of language: how words and phrases are usually combined, what topics are typically discussed together, and what tone or style is appropriate in different contexts. That allows them to generate human-like text and perform a wide range of tasks, such as writing articles, answering questions, or analyzing unstructured data.
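To make the idea of "statistical understanding" concrete, here is a drastically simplified sketch: a bigram model that counts which word tends to follow which in a tiny corpus. Real LLMs use neural networks with billions of parameters rather than raw counts, so this is an illustration of the principle only; the corpus and function names are invented for the example.

```python
from collections import Counter, defaultdict

# Tiny training corpus; a real LLM would see hundreds of billions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for every word, which words follow it and how often.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def most_likely_next(word):
    # Pick the statistically most frequent successor of `word`.
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # → cat ("cat" follows "the" twice, more than any other word)
```

Scaled up by many orders of magnitude, and with counts replaced by learned neural representations, this next-word-prediction objective is what produces the fluent text described above.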

LLMs include OpenAI's GPT-4, Google's PaLM, and Meta's LLaMA. These LLMs serve as foundations for AI applications. ChatGPT is built on GPT-3.5 and GPT-4, while Bard uses Google's Pathways Language Model 2 (PaLM 2) as its foundation.

Some of the best-known applications are:

ChatGPT 3.5. The free version of ChatGPT released by OpenAI in November 2022. It was trained on data only up to 2021, and while it is very fast, it is prone to inaccuracies.

ChatGPT 4.0. The newest version of ChatGPT, which is more powerful and accurate than ChatGPT 3.5 but also slower, and it requires a paid account. It also has extended capabilities through plug-ins that give it the ability to interface with content from websites, perform more sophisticated mathematical functions, and access other services. A new Code Interpreter feature gives ChatGPT the ability to analyze data, create charts, solve math problems, edit files, and even develop hypotheses to explain data trends.

Microsoft Bing Chat. An iteration of Microsoft's Bing search engine that is enhanced with OpenAI's ChatGPT technology. It can browse websites and offers source citations with its results.

Google Bard. Google's AI generates text, translates languages, writes different kinds of creative content, and writes and debugs code in more than 20 different programming languages. The tone and style of Bard's replies can be fine-tuned to be simple, long, short, professional, or casual. Bard also leverages Google Lens to analyze images uploaded with prompts.

Anthropic Claude 2. A chatbot that can generate text, summarize content, and perform other tasks, Claude 2 can analyze texts of roughly 75,000 words, about the length of The Great Gatsby, and generate responses of more than 3,000 words. The model was built using a set of principles that serve as a sort of constitution for AI systems, with the aim of making them more helpful, honest, and harmless.

These AI systems have been improving at a remarkable pace, including in how well they perform on assessments of human knowledge. OpenAI's GPT-3.5, which was released in March 2022, only managed to score in the 10th percentile on the bar exam, but GPT-4.0, introduced a year later, made a significant leap, scoring in the 90th percentile. What makes these feats especially impressive is that OpenAI did not specifically train the system to take these exams; the AI was able to come up with the correct answers on its own. Similarly, Google's medical AI model substantially improved its performance on a U.S. Medical Licensing Examination practice test, with its accuracy rate jumping to 85 percent in March 2021 from 33 percent in December 2020.

These two examples prompt one to ask: if AI continues to improve so rapidly, what will these systems be able to achieve in the next few years? What's more, new studies challenge the assumption that AI-generated responses are stale or sterile. In the case of Google's AI model, physicians preferred the AI's long-form answers to those written by their fellow doctors, and nonmedical study participants rated the AI answers as more helpful. Another study found that participants preferred a medical chatbot's responses over those of a physician and rated them significantly higher, not just for quality but also for empathy. What will happen when empathetic AI is used in education?

Other studies have looked at the reasoning capabilities of these models. Microsoft researchers suggest that newer systems exhibit more general intelligence than previous AI models and are coming strikingly close to human-level performance. While some observers question those conclusions, the AI systems display an increasing ability to generate coherent and contextually appropriate responses, make connections between different pieces of information, and engage in reasoning processes such as inference, deduction, and analogy.

Despite their prodigious capabilities, these systems are not without flaws. At times, they churn out information that might sound convincing but is irrelevant, illogical, or entirely false, an anomaly known as "hallucination." The execution of certain mathematical operations presents another area of difficulty for AI. And while these systems can generate well-crafted and realistic text, understanding why the model made specific decisions or predictions can be challenging.

The Importance of Well-Designed Prompts

Using generative AI systems such as ChatGPT, Bard, and Claude 2 is relatively simple. One has only to type in a request or a task (called a prompt), and the AI generates a response. Properly constructed prompts are essential for getting useful results from generative AI tools. You can ask generative AI to analyze text, find patterns in data, compare opposing arguments, and summarize an article in different ways (see sidebar for examples of AI prompts).

One challenge is that, after using search engines for years, people have been preconditioned to phrase questions in a certain way. A search engine is something like a helpful librarian who takes a specific question and points you to the most relevant sources for possible answers. The search engine (or librarian) doesn't create anything new but efficiently retrieves what's already there.

Generative AI is more akin to a competent intern. You give a generative AI tool instructions through prompts, as you would to an intern, asking it to complete a task and produce a product. The AI interprets your instructions, thinks about the best way to carry them out, and produces something original or performs a task to fulfill your directive. The results aren't pre-made or stored somewhere; they're produced on the fly, based on the information the intern (generative AI) has been trained on. The output often depends on the precision and clarity of the instructions (prompts) you provide. A vague or poorly defined prompt might lead the AI to produce less relevant results. The more context and direction you give it, the better the result will be. What's more, the capabilities of these AI systems are being enhanced through the introduction of versatile plug-ins that equip them to browse websites, analyze data files, or access other services. Think of this as giving your intern access to a group of experts to help accomplish your tasks.

One strategy in using a generative AI tool is first to tell it what kind of expert or persona you want it to be. Ask it to be an expert management consultant, a skilled teacher, a writing tutor, or a copy editor, and then give it a task.

Prompts can also be constructed to get these AI systems to perform complex and multi-step operations. For example, let's say a teacher wants to create an adaptive tutoring program, for any subject, any grade, in any language, that customizes the examples for students based on their interests. She wants each lesson to culminate in a short-response or multiple-choice quiz. If the student answers the questions correctly, the AI tutor should move on to the next lesson. If the student responds incorrectly, the AI should explain the concept again, but using simpler language.

Previously, designing this kind of interactive system would have required a relatively sophisticated and expensive software program. With ChatGPT, however, just giving those instructions in a prompt delivers a serviceable tutoring system. It isn't perfect, but remember that it was built virtually for free, with just a few lines of English as a command. And nothing in the education market today has the capability to generate almost limitless examples to connect the lesson concept to students' interests.
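The control flow the teacher describes can be sketched in a few lines. This is a minimal illustration, not a real product: `ask_model` stands in for a call to any chat-style generative AI service, and all the prompts and function names here are invented for the example.

```python
def run_lesson(lesson, ask_model, get_student_answer, is_correct):
    """Teach one lesson, quiz the student, and re-explain on a wrong answer."""
    # Explain the concept, customized to the student's interests.
    ask_model(f"Explain {lesson} using examples tied to the student's interests.")
    # End the lesson with a short quiz.
    quiz = ask_model(f"Write one short multiple-choice question about {lesson}.")
    if is_correct(lesson, get_student_answer(quiz)):
        return "advance"  # correct answer: move on to the next lesson
    # Incorrect answer: ask the model to try again in simpler language.
    ask_model(f"Explain {lesson} again, but using simpler language.")
    return "retry"
```

With `ask_model` stubbed out, you can trace the branching logic without any AI service at all; in the scenario above, that same logic is expressed entirely in a natural-language prompt, and the model carries it out.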

Chained prompts can also help focus AI systems. For example, an educator can prompt a generative AI system first to read a practice guide from the What Works Clearinghouse and summarize its recommendations. Then, in a follow-up prompt, the teacher can ask the AI to develop a set of classroom activities based on what it just read. By curating the source material and using the right prompts, the educator can anchor the generated responses in evidence and high-quality research.

However, much like fledgling interns learning the ropes in a new environment, AI does commit occasional errors. Such fallibility, while inevitable, underlines the critical importance of maintaining rigorous oversight of AI's output. Monitoring not only acts as a crucial checkpoint for accuracy but also becomes a vital source of real-time feedback for the system. It's through this iterative refinement process that an AI system, over time, can significantly minimize its error rate and increase its efficacy.

Uses of AI in Education

In May 2023, the U.S. Department of Education released a report titled Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations. The department had conducted listening sessions in 2022 with more than 700 people, including educators and parents, to gauge their views on AI. The report noted that constituents believe that action is required now in order to get ahead of the expected increase of AI in education technology, and they want to roll up their sleeves and start working together. People expressed anxiety about future potential risks with AI but also felt that AI may enable achieving educational priorities in better ways, at scale, and with lower costs.

AI could serveor is already servingin several teaching-and-learning roles:

Instructional assistants. AI's ability to conduct human-like conversations opens up possibilities for adaptive tutoring or instructional assistants that can help explain difficult concepts to students. AI-based feedback systems can offer constructive critiques on student writing, which can help students fine-tune their writing skills. Some research also suggests certain kinds of prompts can help children generate more fruitful questions about learning. AI models might also support customized learning for students with disabilities and provide translation for English language learners.

Teaching assistants. AI might tackle some of the administrative tasks that keep teachers from investing more time with their peers or students. Early uses include automated routine tasks such as drafting lesson plans, creating differentiated materials, designing worksheets, developing quizzes, and exploring ways of explaining complicated academic materials. AI can also provide educators with recommendations to meet student needs and help teachers reflect, plan, and improve their practice.

Parent assistants. Parents can use AI to generate letters requesting individualized education plan (IEP) services or to ask that a child be evaluated for gifted and talented programs. For parents choosing a school for their child, AI could serve as an administrative assistant, mapping out school options within driving distance of home, generating application timelines, compiling contact information, and the like. Generative AI can even create bedtime stories with evolving plots tailored to a child's interests.

Administrator assistants. Using generative AI, school administrators can draft various communications, including materials for parents, newsletters, and other community-engagement documents. AI systems can also help with the difficult tasks of organizing class or bus schedules, and they can analyze complex data to identify patterns or needs. ChatGPT can perform sophisticated sentiment analysis that could be useful for measuring school-climate and other survey data.

Though the potential is great, most teachers have yet to use these tools. A Morning Consult and EdChoice poll found that while 60 percent say they've heard about ChatGPT, only 14 percent have used it in their free time, and just 13 percent have used it at school. It's likely that most teachers and students will engage with generative AI not through the platforms themselves but rather through AI capabilities embedded in software. Instructional providers such as Khan Academy, Varsity Tutors, and Duolingo are experimenting with GPT-4-powered tutors that are trained on datasets specific to these organizations to provide individualized learning support that has additional guardrails to help protect students and enhance the experience for teachers.

Google's Project Tailwind is experimenting with an AI notebook that can analyze student notes and then develop study questions or provide tutoring support through a chat interface. These features could soon be available on Google Classroom, potentially reaching over half of all U.S. classrooms. Brisk Teaching is one of the first companies to build a portfolio of AI services designed specifically for teachers: differentiating content, drafting lesson plans, providing student feedback, and serving as an AI assistant to streamline workflow among different apps and tools.

Providers of curriculum and instruction materials might also include AI assistants for instant help and tutoring tailored to the companies' products. One example is the edX Xpert, a ChatGPT-based learning assistant on the edX platform. It offers immediate, customized academic and customer support for online learners worldwide.

Regardless of the ways AI is used in classrooms, the fundamental task of policymakers and education leaders is to ensure that the technology is serving sound instructional practice. As Vicki Phillips, CEO of the National Center on Education and the Economy, wrote, "We should not only think about how technology can assist teachers and learners in improving what they're doing now, but what it means for ensuring that new ways of teaching and learning flourish alongside the applications of AI."



The Role of Artificial Intelligence in the Future of Media – Fagen wasanni

Posted: at 10:56 am

There has been some confusion and concern among people about the role of artificial intelligence (AI) in our lives. However, AI is simply a technology that can perform tasks requiring human intelligence. It learns from data and improves its performance over time. AI has the potential to drive nearly 45% of the economy by 2023.

AI can be categorized into three types: Narrow AI, General AI, and Super AI. Narrow AI is designed for specific tasks, while General AI can perform any intellectual task that a human can do, although it doesn't exist yet. Super AI is purely theoretical and surpasses human intelligence in every aspect.

For media companies, AI applications like content personalization, automated content generation, sentiment analysis, and audience targeting can greatly benefit content delivery and audience engagement. AI can analyze customer data for targeted marketing campaigns, create personalized content, predict customer behavior, analyze visual content, and assist in social media management.

Companies can transition to AI by identifying pain points, collecting and preparing relevant data, starting with narrow applications, collaborating with AI experts, and forming a task force to integrate AI across the organization. AI can automate repetitive tasks, enhance decision-making, and free up human resources for more strategic work.

However, it is important for brands to maintain authenticity and embrace diversity while using AI for marketing. AI algorithms are only as unbiased as the data they are trained on, so brands should use diverse data and establish ethical guidelines to mitigate biases. Human creativity and understanding are irreplaceable, and brands should emphasize the importance of human-AI collaboration.

Overall, AI has the potential to revolutionize the media industry by improving customer experiences, optimizing operations, and delivering relevant content. It is crucial for companies to understand and leverage the power of AI to stay competitive in the evolving digital landscape.



Can AI Help Me Find the Right Running Shoes? – CNET

Posted: at 10:56 am

Like a lot of other runners, I obsess over shoes. Compared with other sports, running doesn't require a lot in terms of equipment, but you can't cut corners when it comes to your feet.

For me, a good fit and comfort are most important, but I also don't want shoes that will slow me down. Super-cushioned sneakers might be great if you're doing a loop around the neighborhood with your friends, or if your job requires you to spend all day on your feet, but not when you're trying to cut a few minutes off a race time.

That search for the perfect combination has felt like a never-ending quest since I started running a couple years ago. Now, training for my very first marathon, the TCS New York City Marathon on Nov. 5, the stakes are higher than ever. So when I was offered the chance to try out Fleet Feet's new and improved shoe-fitting software that's powered by artificial intelligence, I went for it.

But that doesn't mean I wasn't skeptical about its capabilities. Up until recently, a lot of consumer-facing AI has been more hype than reality. Meanwhile, I've been shopping at Fleet Feet, a national chain of specialty running stores, since shortly after joining my neighborhood running group in March 2022.

For more than a year, the company's in-house shoe nerds, whom Fleet Feet refers to as outfitters, have largely kept my feet happy. They've answered all of my nitpicky questions, and their recommendations changed as my running needs and goals evolved over time.

How does AI play into that?

In this case, AI provides a way to let store employees quickly compare the specific dimensions of my feet with those of millions of others, along with the designs of the shoes in their inventory, to pick out which ones might fit me the best.

The AI isn't designed to replace expert employees; it just gives them a better starting point for finding shoes with the correct fit, says Michael McShane, the retail experience manager for the New York store I visited.

"It turns the data into something much more understandable for the consumer," McShane says. "I'm still here to give you an expert assessment, teach you what the data says and explain why it's better to come here than going to a kind of generic store."

Anyone who's ever set foot, so to speak, in a running store knows there are lots and lots of shoes out there, and everyone's feet are different. What feels like a great shoe to one person could be absolute torture for another to run in.

A look at some of the data collected by a Fleet Feet Fit ID scan.

Originally rolled out in 2018, Fleet Feet's Fit Engine software analyzes the shapes of both of a runner's feet (collected through a 3D scan process called Fit ID), taking precise measurements in four different areas. It looks at not just how long a person's feet are, but also how high their arches are, how wide their feet are across the toes, and how much room they need at their heel.

Plates in the scanner also measure how a person stands and carries their weight. Importantly, the scanner looks at both feet. Runners especially put their feet through a lot of use and abuse, making it likely that their feet will be shaped differently.

Mine were no exception: one of my feet measured more than a half size bigger than the other. I can't say I was surprised. In addition to ramping my training up to an average of 20 miles a week over the past year, my feet have also suffered through 17 years on the mean streets of New York, two pregnancies, and one foot injury that left me with a wonky right big toe.

What was a little surprising was that both feet measured bigger than my usual size 9 or 9.5. I've always had big feet, especially for a woman who stands just over 5 feet tall, but I'll admit that it was still a little traumatizing to be trying on shoes a full size larger than that for the first time.

The software's AI capabilities allow the system to then quickly compare the data from a customer's scan to all of the shoes in the store's inventory, as well as the millions of other foot scans in the system. Each shoe is graded as to how its measurements matched up with the customer's. Color-coded graphics show how each shoe measures up in specific areas.
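The matching step described above can be thought of as scoring each shoe by how closely its dimensions track the scan. Here is a hypothetical sketch of that idea: the four measurement areas mirror the ones named earlier, but all field names, units, and the scoring rule are assumptions for illustration, since the real Fit Engine is proprietary.

```python
# The four areas the scan measures, per the description above.
FEATURES = ("length", "arch_height", "toe_width", "heel_width")

def fit_mismatch(foot_scan, shoe):
    # Total absolute difference across the measured areas; lower means a closer fit.
    return sum(abs(foot_scan[f] - shoe["dims"][f]) for f in FEATURES)

def rank_inventory(foot_scan, inventory):
    # Order the store's shoes from best match (smallest mismatch) to worst.
    return sorted(inventory, key=lambda shoe: fit_mismatch(foot_scan, shoe))
```

A production system would weight the areas differently, grade each one separately for the color-coded graphics, and compare against millions of other scans rather than a single distance score, but the ranking idea is the same.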

While store employees have used versions of the software, including the AI, over the years, Fleet Feet says the latest improvements make it consumer facing for the first time, instead of something that takes place completely behind the scenes. The ultimate goal is to add it to the company's website to make it easier to find shoes that fit online, something that's notoriously tricky even for the biggest running shoe enthusiasts.

In addition to telling McShane and me how well a shoe could potentially fit, the software gave me a specific starting size to try on, since sizing can vary depending on shoe brand and model.

And I sure did try on shoes. The AI gave McShane loads of suggestions to start with, but it was up to him to narrow it down for me, taking into account my training needs and preferences. Ultimately, I wanted something cushioned and comfortable enough to get me through a marathon, but still light and agile enough that I wouldn't feel clunky or weighed down.

I also wanted something new. After a year of almost religiously wearing Hoka Cliftons for everyday runs, they now felt too bulky and slow. I also liked the Brooks Ghost trainers, but more for walking around New York than racing.

And I was more than happy to say goodbye to a pair of Nike Zoom Fly 5 shoes that I bought for the NYC Half Marathon. Their carbon-fiber plates and light construction made them super speedy, but their lack of heel cushioning gave me monster blisters that would explode and bleed. Sure, I could have taken them back, but I liked their speed so much I just taped my feet up every time I wore them to protect against the rubbing.

The Mizuno Wave Rider 26.

I spent well over an hour at Fleet Feet trying all kinds of shoes. Since the AI had pinpointed the appropriate size for each model, the sizes I tried on varied but they all pretty much fit. That in itself was a time saver. The main challenge was figuring out what felt the most comfortable when I took a jog around the store.

A pair of Brooks Glycerin felt cushy, but also a bit clunky. I loved a pair of Diadoras from Italy, but they ran small and the store didn't have my size, which probably would have been a monster 10.5, in stock. Conversely, a New Balance model I tried seemed too roomy to give me enough support.

For me, it was about finding the right level of cushioning and weight. Per McShane's advice, I tried my best to ignore colors. When it comes to running shoes, I'm a big fan of bright, fun colors, but looks don't help with comfort or cut seconds off your mile pace.

After many, many boxes, it came down to the Asics Gel-Cumulus and Mizuno Wave Rider (both $140). Both were light and springy and I took more than one jog around the store in both of them. I also tried them out with a new pair of insoles ($55), which also were fitted to me with the help of the AI.

I've never used insoles before, but I was told that they would give me greater support for the kind of double-digit mile training I had ahead of me, improving my endurance and reducing the chance of injury. Socks are also key to preventing dreaded blisters, so I grabbed a pair of my go-to Feetures Elite Ultra Lights ($18).

After much debate, I ended up walking out of the store with the Mizunos. While I've had Asics in the past, I've never tried Mizunos before. They seemed a bit faster and more tailored to my feet than the Asics were. It also turned out that they were on sale and I ended up getting them for $105.

That's because there's a new version rolling out that the store didn't have in stock yet, so they weren't in the system for the AI to find. While it was nice to save $35, had I known that, I might have gone with the Asics just because they're more current.

After four runs totaling about 25 miles, I still like the shoes, though the insoles have taken a little getting used to. I'm also thinking about picking up a pair of the Asics just to compare.

For most people, this use of AI will probably go unnoticed, at least until it's added to the website. While officially now geared to the consumer, it still seems more of a tool for store employees. Sure, data-crunching AI can be great, but it's the efforts and expert advice of the outfitters themselves that are going to ensure that I keep coming back to their stores.

After all, the TCS NYC Marathon isn't until Nov. 5 and I've got a long road of many miles and many, many pairs of shoes ahead of me before I reach the starting line.

Read this article:

Can AI Help Me Find the Right Running Shoes? - CNET

Posted in Artificial Super Intelligence | Comments Off on Can AI Help Me Find the Right Running Shoes? – CNET

AI is revolutionizing manual cell counting – Advanced Science News

Posted: at 10:56 am

Cell counting is extremely important in research, medicine, and even environmental monitoring, where scientists use it to track cell growth, monitor a person's health, or keep tabs on plankton levels in the ocean and bacteria in water sources.

But scientists who have used a hemacytometer, a specialized laboratory device used for manual cell counting, might tell you how challenging it can be to accurately determine cell numbers. This is because the hemacytometer consists of a thick glass slide with a rectangular indentation that creates a counting chamber. The chamber is divided into grids or squares with known dimensions, allowing for accurate cell counting and concentration calculations. It can be quite a challenge to figure out the number of cells in those tiny spaces.
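The arithmetic the chamber enables is simple once a count is in hand. As a hedged sketch using typical chamber dimensions (not figures from this article): each large square of a standard hemacytometer is 1 mm x 1 mm with a 0.1 mm chamber depth, so it holds 0.1 microliters, and concentration in cells/mL is the mean count per square times 10^4, times any dilution factor.

```python
def cells_per_ml(total_cells: int, squares_counted: int, dilution: float = 1.0) -> float:
    """Standard hemacytometer arithmetic (typical 1 mm x 1 mm x 0.1 mm squares):
    concentration = mean cells per large square x 10^4 x dilution factor."""
    mean_per_square = total_cells / squares_counted
    return mean_per_square * 1e4 * dilution

# Hypothetical example: 220 cells counted across 4 large squares,
# sample diluted 1:2 before loading the chamber.
print(cells_per_ml(220, 4, dilution=2.0))  # 1100000.0 cells/mL
```

Even with the arithmetic this simple, the hard part remains the counting itself, which is exactly what the team set out to automate.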

"Manual cell counting is a tedious task," explained Yudong Zhang, a professor in the School of Computing and Mathematical Sciences at the University of Leicester, in an email. "It requires operators to count cells in the small grids of a counting board under a microscope. The grids on the counting board are divided into tiny sections, making it easy to make counting errors. Moreover, performing such a concentration-demanding task for a prolonged period can also have an impact on the operator's physical well-being."

Zhang therefore wondered whether, in this age of AI and automation, something more could be done to alleviate the burden of manual counting methods, which are often time consuming, labor intensive, and susceptible to human error.

"Last year, while tutoring my cousin for his high school assignment, I came across a question about using a blood cell counting board to count cells," said the study's co-author, Lijia Deng. "It made me curious if there were AI technologies available for this purpose. After conducting a bit of research, I found that there were opportunities to improve existing cell counting methods."

Alongside colleagues Shuihua Wang and Qinghua Zhou from the same university, the team set out to alleviate the burden of manual counting. To do this, they created an innovative automated detection method powered by AI.

"Automated cell counting methods are not completely absent from these fields. However, mainstream instruments are based on the Coulter Principle, which is the detection and measurement of changes in electrical resistance produced by a particle or cell suspended in a conductive liquid," explained Zhang. "These instruments do not provide visual feedback, and cell morphology often reflects important information, such as the differences between cancer cells and normal cells."

In a recent study published in Advanced Intelligent Systems, the team unveiled a revolutionary deep learning network they called the Spatial-based Super-Resolution Reconstruction Network (SSRNet), which was spearheaded by Deng. "This network predicts cell counts and segments cell distribution contours with remarkable precision," said Zhang.

Using this method, the cell sample is captured as an image, which is then processed to enhance the clarity of the cells against the image's background. The image is then fed to the AI counting system, which generates the cell count and distribution within the image.

"This AI-based approach can quickly predict the number and distribution of cells with just a single image," said Zhang. "The principle of this method lies in the convolutional neural network's focus on cell features, enabling the prediction of cell count and distribution."
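As an illustrative aside (my own sketch, not the team's SSRNet code, which the article does not include): many counting networks of this kind output a per-pixel density map whose values integrate to the number of cells in the image. A minimal demonstration of that idea, with the network's output faked by hand:

```python
import numpy as np

def count_from_density_map(density: np.ndarray) -> float:
    """Estimate the cell count by integrating a predicted density map:
    each cell contributes a small blob of values that sums to ~1.0."""
    return float(density.sum())

# Hypothetical 8x8 density map with three hand-placed blobs,
# each summing to exactly 1.0 (i.e., one cell per blob).
density = np.zeros((8, 8))
for r, c in [(2, 2), (5, 5), (6, 1)]:
    density[r, c] = 0.6
    density[r, c + 1] = 0.2
    density[r + 1, c] = 0.2

print(count_from_density_map(density))  # 3.0 -> three cells
```

The real network would predict such a map directly from the microscope image; the integration step is what turns a spatial prediction into a count.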

Traditionally, AI uses artificial neural networks (computational models inspired by the structure and function of the human brain) to perform tasks and learn from encountered situations. "Training any neural network model requires rich datasets," added Zhang, "and there is a lack of sufficient, annotated datasets in the field of cell imaging."

The team therefore took a different approach to overcome the lack of training data, designing their model to predict the overall quantity and distribution regions of cells directly in order to accomplish the counting task.

They did this by taking advantage of a concept called upsampling, which is a technique used to increase the resolution or sampling rate of digital data. It involves taking existing digital samples and adding extra samples in between them to create a higher-resolution version of the original data.

"The traditional method is to use purely mathematical methods, which introduce new pixel values due to mathematical calculations," explained Deng. "Although these new pixels make the image appear clearer, they can affect the prediction of quantity. Our method uses artificial intelligence to predict new pixels, reducing the potential system errors caused by mechanical calculations, improving counting accuracy, and also achieving the performance of traditional methods in clarity."

"It's like rolling out the dough after fermentation: our approach doesn't introduce new pixels out of thin air; each new pixel is inferred from existing ones," Deng continued. "Compared to purely mathematical methods, our approach ensures better consistency between the upscaled image and the original image in terms of features. Additionally, the larger the scaling factor, the more apparent the advantages become."
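To make the contrast concrete, here is a minimal sketch (my own illustration, not the paper's code) of the classical, purely mathematical upsampling Deng is comparing against: nearest-neighbour repetition just copies pixels, while linear interpolation infers each inserted pixel from its existing neighbours.

```python
import numpy as np

def upsample_nearest(img: np.ndarray, factor: int) -> np.ndarray:
    """Classical nearest-neighbour upsampling: repeat each pixel
    `factor` times along both axes. No new information is created."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def upsample_linear_1d(row: np.ndarray) -> np.ndarray:
    """2x linear upsampling along one axis: each inserted sample is the
    average of its two neighbours, i.e., inferred from existing pixels."""
    mids = (row[:-1] + row[1:]) / 2.0
    out = np.empty(row.size + mids.size)
    out[0::2] = row   # keep the original samples
    out[1::2] = mids  # interleave the interpolated ones
    return out

img = np.array([[1.0, 3.0], [5.0, 7.0]])
print(upsample_nearest(img, 2).shape)                  # (4, 4)
print(upsample_linear_1d(np.array([1.0, 3.0, 5.0])))   # [1. 2. 3. 4. 5.]
```

The team's learned approach replaces the fixed averaging rule with a network that predicts the new pixels from image features, which is the source of the accuracy gains they report.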

There was also the added challenge of ensuring their AI system could be used anywhere, even in regions with limited computing resources. "To help popularize our AI model and make it available to labs that may lack advanced computing resources, we made our neural network model extremely lightweight, so that its running memory read and write consumption is only 1/10 that of a traditional AI model," the team said.

The innovative features of their AI model will allow it to find application beyond just medicine and biology, promising to unlock new possibilities in various industries. As proof-of-concept, the team demonstrated how this model could be used to count the number of sesame seeds on a piece of bread.

Sesame counting was done just for fun, the team says; it has no practical significance, but it demonstrates the method's sophistication and speed, which could one day be applied to more advanced applications, including cell counting, among others. "For example, we could eventually use aerial photography to remotely capture the breeding population of penguins to understand their population size, which avoids human interference with animals," explained Deng.

"This method represents a significant leap forward in the field of cell counting," said Zhang. "By leveraging the power of AI and innovative spatial-based super-resolution reconstruction techniques, this approach offers unprecedented precision and efficiency in predicting cell numbers and distributions, which can help fight against infectious diseases."

With its potential, this advancement promises to streamline processes and reduce human error. As the research continues, further refinements and applications of this AI-powered method are expected to reshape the landscape of cell analysis, ultimately benefiting countless individuals and facilitating scientific progress.

Reference: Lijia Deng, Qinghua Zhou, Shuihua Wang, Yudong Zhang, Spatial-Based Super-resolution Reconstruction: A Deep Learning Network via Spatial-Based Super-resolution Reconstruction for Cell Counting and Segmentation, Advanced Intelligent Systems (2023). DOI: 10.1002/aisy.202300185


Read this article:

AI is revolutionizing manual cell counting - Advanced Science News


2 Warren Buffett Super Stocks to Buy Hand Over Fist in August – The Motley Fool

Posted: at 10:56 am

Born in Omaha, Nebraska, on Aug. 30, 1930, Warren Buffett has become one of the most successful investors in history. The Berkshire Hathaway CEO has led his company on an incredible run of market-beating success and generated fantastic returns for long-term shareholders. He's also been an inspiration to millions of investors around the world and helped popularize wisdom, strategies, and individual investment opportunities that have put people on a path to financial freedom.

If you're looking to follow in Buffett's footsteps, read on to see why two Motley Fool contributors believe that investing in these Berkshire Hathaway portfolio components would be a smart move this month.

Keith Noonan: If you're looking to replicate Buffett's investing moves, it makes sense to own Apple (AAPL) in your portfolio. In a stunning vote of confidence, the Oracle of Omaha has made the tech stock Berkshire Hathaway's largest portfolio holding by far. Apple currently accounts for 46.9% of the company's stock portfolio. For comparison, Bank of America is Berkshire's second-largest stock position and makes up 8.9% of its total portfolio.

In recent years, Apple has often ranked as the world's most profitable company or taken second place behind oil giant Saudi Aramco. The tech giant stands as the world's largest company and currently has a market capitalization of nearly $3.1 trillion. Buffett loves companies with strong brands that are capable of serving up consistent profits, and Apple's world-beating business performance has made the stock a big winner for Berkshire Hathaway.

Berkshire first bought Apple shares in the first quarter of 2016, and investing in the tech giant has played a huge role in powering the conglomerate to beat the S&P 500 index since then. Berkshire's share price has climbed 171% since the beginning of 2016, handily topping the S&P 500 index's total return of 158% across the stretch.

Meanwhile, Apple stock has delivered a dividend-adjusted total return of 718% across the stretch.

AAPL Total Return Level data by YCharts

In addition to claiming more than 85% of global operating profits from smartphone sales, Apple has been gaining ground in the computers market, growing its software-and-services segment and scoring wins in emerging categories like wearable hardware. With one of the strongest brands in the tech space and a proven penchant for design excellence, the company could also go on to be a big winner in categories like augmented reality, smart cars, and artificial-intelligence-powered personal assistants.

Apple's brand and ecosystem strengths are unmatched in the consumer hardware space, and the company will likely continue to be a technology leader for decades to come.

Parkev Tatevosian: One of my favorite Warren Buffett stocks to buy right now is Visa. The financial processing company has spent decades building a network of merchants that accept its cards as a payment method. On the other side of the transaction, there are over 4 billion Visa cards in consumer wallets today. Arguably, the most challenging part of its business plan is behind it.

That can certainly be observed in Visa's financial figures. Revenue grew from $11.8 billion in 2013 to $29.3 billion in 2022. The company benefits from moderate inflation since Visa primarily generates revenue as a percentage of transaction value using a Visa card.
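As a quick sanity check on those figures (my own arithmetic, not a number quoted in the article), revenue growing from $11.8 billion in 2013 to $29.3 billion in 2022 works out to a compound annual growth rate of roughly 10.6%:

```python
# Compound annual growth rate (CAGR) implied by the revenue figures
# above: $11.8B in 2013 to $29.3B in 2022, a span of 9 years.
start, end, years = 11.8, 29.3, 2022 - 2013
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 10.6% per year
```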

More importantly, Visa has built an extremely profitable business. Its operating income rose from $7.2 billion to $19.7 billion over the same years. For 2023, Visa earned an astounding 67.1% operating profit margin. That figure is among the highest of any company in Buffett's Berkshire Hathaway portfolio. It's also the highest of any of the businesses I follow.

V PE Ratio (Forward 1y) data by YCharts

Fortunately, investors don't have to pay an arm and a leg for this incredible stock. Visa is trading at a forward price-to-earnings multiple of under 24.5. Paying a fair price for an excellent business is a reasonable recipe for investment success. I'm not surprised that one of the greatest investors of all time has Visa stock in his portfolio.

When it comes to picking stocks, Warren Buffett famously prefers to keep it simple and avoid companies and situations that are overly complex. Within that mold, Apple and Visa are large, well-known companies that can be counted on to reliably generate significant profits. They might not be the most explosive stocks on the market, but they offer well-balanced risk-reward profiles, and it's not hard to see why Buffett's company has invested in them.

Bank of America is an advertising partner of The Ascent, a Motley Fool company. Keith Noonan has no position in any of the stocks mentioned. Parkev Tatevosian, CFA has positions in Apple and Visa. The Motley Fool has positions in and recommends Apple, Bank of America, Berkshire Hathaway, and Visa. The Motley Fool has a disclosure policy.

Originally posted here:

2 Warren Buffett Super Stocks to Buy Hand Over Fist in August - The Motley Fool


MQ-9B – A Growing Track Record of Performance in the Maritime … – General Atomics Aeronautical Systems

Posted: at 10:56 am

MQ-9B SeaGuardian on patrol above the North Sea during the 2021 European Maritime Demos.

Unmanned aircraft have been revolutionizing intelligence, military, and so many other applications for years. Now their impact extends to the way navies operate at sea.

Leading the way is the MQ-9B SeaGuardian, which enables the most advanced navies, coast guard agencies, and other maritime authorities to patrol longer, detect more, and make existing units much more effective.

Manufactured by San Diego-based General Atomics Aeronautical Systems, Inc., MQ-9B SeaGuardian has recorded a number of recent first-ever achievements in a range of operational and test environments around the world. Even as users prove out what the system can do as it begins to enter widespread service, they're only scratching the surface of the ways MQ-9B will help rewrite the practice of sea power.

SeaGuardian has shown it can hunt for and help prosecute submarines. It escorts naval surface task groups. It provides sensing, targeting, and communications for the battle force. It self-deploys from its home station and integrates seamlessly into normal aviation traffic. When the mission is over, it flies back to its home station to get ready for the next assignment.

A substantial track record already is coming into focus.

In just two years' time, the aircraft has recorded more than 12,000 operational hours in the service of the Indian Navy.

MQ-9B provided security and surveillance for the recent G7 2023 summit at its island location in Hiroshima, Japan.

And SeaGuardian has joined the U.S. Navy for some of its most complex and challenging integrated exercises, including a major anti-submarine warfare exercise this past May in which the aircraft, flown by its crew from a ground control station and operated via satellite, joined with U.S. Navy helicopter squadrons to search for submarines in a range off the coast of Southern California. Helicopter crews flew out from San Diego, dropped their sonobuoys, and then SeaGuardian took over monitoring them. Shortly thereafter, its sensors detected a simulated submarine. This meant other helicopters could deploy to the scene armed with precise data about the target's location and course, and then attack.

This U.S. Navy sub-hunting exercise was just one of several such exploits for SeaGuardian. In April, another, known as Group Sail, involved the aircraft partnering with Navy carrier strike groups off the coast of Hawaii, working with warships, aircraft, and other units to ensure the safe passage of the groups' surface ships.

Strike Group Integration

As part of Group Sail, carriers, cruisers, and destroyers, as well as F-35 Lightning II fighters, F/A-18E/F Super Hornets, EA-18G Growlers, E-2D Advanced Hawkeyes, MH-60 Seahawks, and P-8 Poseidons worked alongside MQ-9B SeaGuardian, which provided the Navy with maritime domain awareness, information dominance, targeting capability, and more. What the aircraft did, in effect, was serve as the distant eyes and ears for naval commanders.

Its onboard sensors can see throughout the visual and infrared spectrum, including, with its onboard multi-mode radar, through clouds, fog, mist, or smoke. Other onboard systems can hear throughout the radio frequency spectrum, collecting intelligence of all kinds that contributes to the most complete common operating picture possible.

No other large medium-altitude, long-endurance aircraft can contribute to sea power like this, and there are even more ways that SeaGuardian contributes. The aircraft's proprietary Detect and Avoid System, invented by GA-ASI, means that it can operate in civil airspace just like any other aircraft. This eliminates the need for special arrangements or human-flown escort aircraft like those that remotely piloted aircraft might have needed in the past.

Also new: SeaGuardian self-deploys to far-flung operating areas. In each of the U.S. Navy maritime exercises described here, the aircraft took off from its home station in California and flew to the base where it was needed. Compare the convenience and simplicity of that kind of operation to the way some unmanned aircraft might have worked in the past: disassembled, packed into a crate, flown in another aircraft, and then reassembled for use on station. SeaGuardian makes all that unnecessary, with great savings of time, money, and personnel.

Advanced onboard and supporting systems help make all this possible, including automatic takeoff and landing, artificial intelligence and machine learning, and cutting-edge networks. Satellite operations mean that MQ-9Bs pilots and crews can be located anywhere. During a May 2023 Northern Edge exercise around Alaska, for example, the crews flew SeaGuardian from the Pacific Northwest area of the United States at Naval Air Station Whidbey Island.

Such remote operation not only removes human crews from harm's way at sea. It also means MQ-9B can cover other inhospitable areas, such as the Earth's cold, ice-covered polar regions, without burdensome hardship deployments for crews or the necessity of also deploying search and rescue teams in case of a mishap. Taking the people off the aircraft protects them and their support units, all while reducing cost and complexity.

The big challenges of the 21st century to seafaring nations and the responsible use of the oceans aren't simple or easy to tackle. But the good news is that navies, coast guards, and others charged with sea power, maritime domain awareness, search and rescue, and other missions have a tool ready to meet those challenges head-on in the MQ-9B SeaGuardian.

Continue reading here:

MQ-9B - A Growing Track Record of Performance in the Maritime ... - General Atomics Aeronautical Systems


When Silicon Valley talks about ‘AI alignment’ here’s why they miss … – Startup Daily

Posted: July 19, 2023 at 1:16 pm

As increasingly capable artificial intelligence (AI) systems become widespread, the question of the risks they may pose has taken on new urgency. Governments, researchers and developers have highlighted AI safety.

The EU is moving on AI regulation, the UK is convening an AI safety summit, and Australia is seeking input on supporting safe and responsible AI.

The current wave of interest is an opportunity to address concrete AI safety issues like bias, misuse and labour exploitation. But many in Silicon Valley view safety through the speculative lens of AI alignment, which misses out on the very real harms current AI systems can do to society and the pragmatic ways we can address them.

AI alignment is about trying to make sure the behaviour of AI systems matches what we want and what we expect. Alignment research tends to focus on hypothetical future AI systems, more advanced than todays technology.

It's a challenging problem because it's hard to predict how technology will develop, and also because humans aren't very good at knowing what we want or agreeing about it.

Nevertheless, there is no shortage of alignment research. There are a host of technical and philosophical proposals with esoteric names such as Cooperative Inverse Reinforcement Learning and Iterated Amplification.

There are two broad schools of thought. In top-down alignment, designers explicitly specify the values and ethical principles for AI to follow (think Asimov's three laws of robotics), while bottom-up efforts try to reverse-engineer human values from data, then build AI systems aligned with those values. There are, of course, difficulties in defining human values, deciding who chooses which values are important, and determining what happens when humans disagree.

OpenAI, the company behind the ChatGPT chatbot and the DALL-E image generator among other products, recently outlined its plans for superalignment. This plan aims to sidestep tricky questions and align a future superintelligent AI by first building a merely human-level AI to help out with alignment research.

But to do this, they must first align the alignment-research AI.

Advocates of the alignment approach to AI safety say failing to solve AI alignment could lead to huge risks, up to and including the extinction of humanity.

Belief in these risks largely springs from the idea that Artificial General Intelligence (AGI), roughly speaking an AI system that can do anything a human can, could be developed in the near future and could then keep improving itself without human input. In this narrative, the super-intelligent AI might then annihilate the human race, either intentionally or as a side-effect of some other project.

In much the same way the mere possibility of heaven and hell was enough to convince the philosopher Blaise Pascal to believe in God, the possibility of future super-AGI is enough to convince some groups we should devote all our efforts to solving AI alignment.

There are many philosophical pitfalls with this kind of reasoning. It is also very difficult to make predictions about technology.

Even leaving those concerns aside, alignment (let alone superalignment) is a limited and inadequate way to think about safety and AI systems.

First, the concept of alignment is not well defined. Alignment research typically aims at vague objectives like building provably beneficial systems, or preventing human extinction.

But these goals are quite narrow. A super-intelligent AI could meet them and still do immense harm.

More importantly, AI safety is about more than just machines and software. Like all technology, AI is both technical and social.

Making safe AI will involve addressing a whole range of issues including the political economy of AI development, exploitative labour practices, problems with misappropriated data, and ecological impacts. We also need to be honest about the likely uses of advanced AI (such as pervasive authoritarian surveillance and social manipulation) and who will benefit along the way (entrenched technology companies).

Finally, treating AI alignment as a technical problem puts power in the wrong place. Technologists shouldn't be the ones deciding what risks and which values count.

The rules governing AI systems should be determined by public debate and democratic institutions.

OpenAI is making some efforts in this regard, such as consulting with users in different fields of work during the design of ChatGPT. However, we should be wary of efforts to solve AI safety by merely gathering feedback from a broader pool of people, without allowing space to address bigger questions.

Another problem is a lack of diversity, both ideological and demographic, among alignment researchers. Many have ties to Silicon Valley groups such as effective altruists and rationalists, and there is a lack of representation from women and other marginalised people, groups who have historically been the drivers of progress in understanding the harm technology can do.

The impacts of technology on society can't be addressed using technology alone.

The idea of AI alignment positions AI companies as guardians protecting users from rogue AI, rather than the developers of AI systems that may well perpetrate harms. While safe AI is certainly a good objective, approaching this by narrowly focusing on alignment ignores too many pressing and potential harms.

So what is a better way to think about AI safety? As a social and technical problem to be addressed first of all by acknowledging and addressing existing harms.

This isn't to say that alignment research won't be useful, but the framing isn't helpful. And hare-brained schemes like OpenAI's superalignment amount to kicking the meta-ethical can one block down the road, and hoping we don't trip over it later on.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

More:

When Silicon Valley talks about 'AI alignment' here's why they miss ... - Startup Daily


How an FEC Deadlock is Hindering the Regulation of AI in Campaigns – Fagen wasanni

Posted: at 1:16 pm

Advances in artificial intelligence (AI) technology have raised concerns about the spread of false information during political campaigns. Unfortunately, a deadlock at the Federal Election Commission (FEC) is preventing a progressive-led effort to put regulations in place.

Democrats in the House and Senate have made a second request to the FEC, asking it to clarify that the law on fraudulent misrepresentation also applies to the use of AI. Initially, a petition led by the consumer advocacy organization Public Citizen was defeated by the FEC's three Republican members. Now, Rep. Adam Schiff (D-Calif.) and Sens. Ben Ray Luján (D-N.M.) and Amy Klobuchar (D-Minn.) are urging the commission to reconsider its decision.

The urgency to address AI in campaigns stems from the technology's increasing advancement and prevalence leading up to the 2024 election. AI can generate realistic images, audio, and video that make it difficult for viewers to discern what is real and what is not. This raises concerns about the dissemination of deepfakes, which could have a significant impact on elections.

Campaigns have already started using AI in various ways. For example, a super PAC supporting Florida Gov. Ron DeSantis (R) used an AI-generated version of former President Trump's voice to narrate a post attacking Iowa Gov. Kim Reynolds (R). Trump himself has also posted AI-generated videos targeting DeSantis, and DeSantis's campaign released an ad featuring seemingly AI-produced images of Trump with Anthony Fauci.

The introduction of AI poses new challenges for campaigns, as it becomes increasingly difficult to detect manipulated content and prevent its widespread dissemination. The FEC has a crucial role to play in addressing these challenges and regulating deceptive AI-produced content.

Public Citizen has submitted a second petition to the FEC, requesting clarification on how the law against fraudulent misrepresentation applies to deceptive AI campaign communications. They emphasize the real-time consequences this issue will have in the upcoming 2024 election and emphasize that regulatory action from the FEC is necessary.

However, Republican FEC Commissioners Allen Dickerson, Sean Cooksey, and James Trainor III have pushed back against the need for regulation. Dickerson stated that the FEC's jurisdiction should only cover cases where an agent of one candidate pretends to be the agent of another, or where fundraising occurs by fraudulently claiming to represent a campaign.

While there is a deadlock between the three GOP commissioners and the three Democratic commissioners at the FEC, those supporting regulation are hopeful that the commission has the power to clarify rules for AI. Former FEC Commissioner Ann Ravel, an Obama appointee, believes that considering fraudulent misrepresentation falls within the scope of the commission's authority.

Public Citizen is urging the FEC to interpret existing regulations and take action against the use of AI in campaigns. They suggest implementing broader regulations such as requiring watermarks or other forms of identification to identify AI-generated content.

Overall, the deadlock at the FEC is hindering efforts to regulate AI in campaigns. As technology continues to advance, it is crucial to address the potential dangers and vulnerabilities associated with AI-generated content. The FEC's involvement is necessary to safeguard the integrity of elections and combat the spread of misinformation.

Follow this link:

How an FEC Deadlock is Hindering the Regulation of AI in Campaigns - Fagen wasanni


Elon Musk Dishes On AI Wars With Google, ChatGPT And Twitter On Fox News – Forbes

Posted: April 20, 2023 at 11:42 am


The world's wealthiest billionaires are drawing battle lines when it comes to who will control AI, according to Elon Musk in an interview with Tucker Carlson on Fox News, which aired this week.

Musk explained that he cofounded ChatGPT-maker OpenAI in reaction to Google cofounder Larry Page's lack of concern over the danger of AI outsmarting humans.

He said the two were once close friends and that he would often stay at Page's house in Palo Alto, where they would talk late into the night about the technology. Page was such a fan of Musk's that in Jan. 2015, Google, along with Fidelity Investments, invested $1 billion in SpaceX for a 10% stake. "He wants to go to Mars. That's a worthy goal," Page said in a March 2014 TED Talk.

But Musk was concerned about Google's acquisition of DeepMind in Jan. 2014.

"Google and DeepMind together had about three-quarters of all the AI talent in the world. They obviously had a tremendous amount of money and more computers than anyone else. So I'm like, we're in a unipolar world where there's just one company that has close to a monopoly on AI talent and computers," Musk said. "And the person in charge doesn't seem to care about safety. This is not good."

Musk said he felt Page was seeking to build a "digital super intelligence," a "digital god."

"He's made many public statements over the years that the whole goal of Google is what's called AGI, artificial general intelligence, or artificial super intelligence," Musk said.

Google CEO Sundar Pichai has not disagreed. In his 60 Minutes interview on Sunday, while speaking about the company's advancements in AI, Pichai said that Google Search was only one to two percent of what Google can do. The company has been teasing a number of new AI products it's planning to roll out at its developer conference, Google I/O, on May 10.

Musk said Page stopped talking to him over OpenAI, a nonprofit with the stated mission of "ensuring that artificial general intelligence (AI systems that are generally smarter than humans) benefits all of humanity," which Musk cofounded in Dec. 2015 with Y Combinator CEO Sam Altman and PayPal alums LinkedIn cofounder Reid Hoffman and Palantir cofounder Peter Thiel, among others.

"I haven't spoken to Larry Page in a few years because he got very upset with me over OpenAI," Musk said, explaining that when OpenAI was created, it shifted things from a unipolar world, where Google controlled most of the world's AI talent, to a bipolar world. "And now it seems that OpenAI is ahead," he said.

But even before OpenAI, as SpaceX was announcing the Google investment in late Jan. 2015, Musk had given $10 million to the Future of Life Institute, a nonprofit organization dedicated to reducing existential risks from advanced artificial intelligence. That organization was founded in March 2014 by AI scientists from DeepMind, MIT, Tufts and UCSC, among others, and was the one that issued the petition calling for a pause in AI development that Musk signed last month.

In 2018, citing potential conflicts with his work with Tesla, Musk resigned his seat on the board of OpenAI.

"I put a lot of effort into creating this organization to serve as a counterweight to Google, and then I kind of took my eye off the ball. Now they are closed source and obviously for-profit, and they're closely allied with Microsoft. In effect, Microsoft has a very strong say in, if not directly controls, OpenAI at this point," Musk said.

Ironically, it's Musk's longtime friend Hoffman who is the link to Microsoft. The two hit it big together at PayPal, and it was Musk who recruited Hoffman to OpenAI in 2015. In 2017, Hoffman became an independent director at Microsoft, which had bought LinkedIn from him for more than $26 billion in 2016; in 2019, Microsoft invested its first billion dollars into OpenAI. Microsoft is currently OpenAI's biggest backer, having invested as much as $10 billion more this past January. Hoffman stepped down from OpenAI's board only recently, on March 3, to enable him to start investing in the OpenAI startup ecosystem, he said in a LinkedIn post. Hoffman is a partner in the venture capital firm Greylock Partners and a prolific angel investor.

All appear on the Forbes Real-Time Billionaires List. As of April 17 at 5 p.m. ET, Musk was the world's second-richest person at $187.4 billion, and Page the eleventh at $90.1 billion. Google cofounder Sergey Brin is in the 12th spot at $86.3 billion. Thiel ranks 677th with a net worth of $4.3 billion, and Hoffman ranks 1,570th with a net worth of $2 billion.

Musk said he thinks Page believes all consciousness should be treated equally, while he disagrees, especially if the digital consciousness decides to curtail the biological kind. Like Pichai, Musk is advocating for government regulation of the technology and says, at minimum, there should be a physical off switch to cut power and connectivity to server farms in case administrative passwords stop working.

Pretty sure I've seen that movie.

Musk told Carlson that he's considering naming his new AI company TruthGPT.

"I will create a third option, although it's starting very late in the game," he said. "Can it be done? I don't know."

The entire interview will be available to view on Fox Nation starting April 19 at 7 a.m. ET. Here are some excerpts, which include his thoughts on encrypting Twitter DMs.

Posted in Artificial Super Intelligence | Comments Off on Elon Musk Dishes On AI Wars With Google, ChatGPT And Twitter On Fox News – Forbes
