The Prometheus League
Breaking News and Updates
Daily Archives: May 21, 2022
AI Weekly: Is AI alien invasion imminent? – VentureBeat
Posted: May 21, 2022 at 6:48 pm
We are excited to bring Transform 2022 back in-person July 19 and virtually July 20 - 28. Join AI and data leaders for insightful talks and exciting networking opportunities. Register today!
Is an AI alien invasion headed for Earth? The VentureBeat editorial staff marveled at the possibility this week, thanks to the massive online traffic earned by one Data Decision Makers community article, with its impossible-to-ignore title, "Prepare for arrival: Tech pioneer warns of alien invasion."
The column, written by Louis Rosenberg, founder of Unanimous AI, was certainly buoyed not only by its SEO-friendly title, but also by its breathless opener: "An alien species is headed for planet Earth and we have no reason to believe it will be friendly. Some experts predict it will get here within 30 years, while others insist it will arrive far sooner. Nobody knows what it will look like, but it will share two key traits with us humans: it will be intelligent and self-aware."
But a fuller read reveals Rosenberg's focus on some of today's hottest AI debates, including the potential for AGI in our lifetimes and why organizations need to prepare with AI ethics: While there's an earnest effort in the AI community to push for safe technologies, there's also a lack of urgency. That's because too many of us wrongly believe that a sentient AI created by humanity will somehow be a branch of the human tree, like a digital descendant that shares a very human core. This is wishful thinking, he argues, and the time to prepare is now.
Coincidentally, this past week was filled with claims, counterclaims and critiques of claims around the potential to realize AGI anytime soon.
Last Friday, Nando de Freitas, a lead researcher at Google's DeepMind AI division, tweeted that "The Game is Over!" in the decades-long quest for AGI, after DeepMind unveiled its new Gato AI, which is capable of complex tasks ranging from stacking blocks to writing poetry.
According to de Freitas, Gato simply needs to be scaled up in order to create an AI that rivals human intelligence. Or, as he wrote on Twitter: "It's all about scale now! It's all about making these models bigger, safer, compute efficient, faster at sampling, smarter memory, more modalities, innovative data, on/offline ... Solving these challenges is what will deliver AGI."
Plenty of experts are pushing back on de Freitas' claims and those of others insisting that AGI, or its equivalent, is at hand.
Yann LeCun, the French computer scientist who is chief AI scientist at Meta, had this to say (on Facebook, of course):
About the raging debate regarding the significance of recent progress in AI, it may be useful to (re)state a few obvious facts:
(0) there is no such thing as AGI. Reaching Human Level AI may be a useful goal, but even humans are specialized.
(1) the research community is making some progress towards HLAI
(2) scaling up helps. It's necessary but not sufficient, because ...
(3) we are still missing some fundamental concepts
(4) some of those new concepts are possibly around the corner (e.g. generalized self-supervised learning)
(5) but we don't know how many such new concepts are needed. We just see the most obvious ones.
(6) hence, we can't predict how long it's going to take to reach HLAI.
Meanwhile, Gary Marcus, founder of Robust.AI and author of Rebooting AI, added to the debate on his new Substack, with its first post dedicated to the discussion of current efforts to develop AGI (including Gato), which he calls "alt intelligence":
"Right now, the predominant strand of work within Alt Intelligence is the idea of scaling: the notion that the bigger the system, the closer we come to true intelligence, maybe even consciousness.
There is nothing new, per se, about studying Alt Intelligence, but the hubris associated with it is. I've seen signs for a while, in the dismissiveness with which the current AI superstars, and indeed vast segments of the whole field of AI, treat human cognition, ignoring and even ridiculing scholars in such fields as linguistics, cognitive psychology, anthropology and philosophy.
But this morning I woke to a new reification, a Twitter thread that expresses, out loud, the Alt Intelligence creed, from Nando de Freitas, a brilliant high-level executive at DeepMind, Alphabet's rightly venerated AI wing, in a declaration that AI is 'all about scale now.'"
Marcus closes by saying:
"Let us all encourage a field that is open-minded enough to work in multiple directions, without prematurely dismissing ideas that happen to be not yet fully developed. It may just be that the best path to artificial (general) intelligence isn't through Alt Intelligence, after all.
As I have written, I am fine with thinking of Gato as an 'Alt Intelligence,' an interesting exploration in alternative ways to build intelligence, but we need to take it in context: it doesn't work like the brain, it doesn't learn like a child, it doesn't understand language, it doesn't align with human values and it can't be trusted with mission-critical tasks.
It may well be better than anything else we currently have, but the fact that it still doesn't really work, even after all the immense investments that have been made in it, should give us pause."
It's nice to know that most experts don't believe the AGI "alien invasion" will arrive anytime soon.
But the fierce debate around AI and its ability to develop human-level intelligence will certainly continue on social media and off.
Let me know your thoughts!
Sharon Goldman, senior editor and writer
Twitter: @sharongoldman
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Learn more about membership.
AI for All: Experts Weigh In on Expanding AI’s Shared Prosperity and Reducing Potential Harms – uschamber.com
Policymakers, technologists, and business leaders must work together to ensure that the prosperity from artificial intelligence is shared throughout society and that unintended harms are addressed and mitigated, said experts at the U.S. Chamber AI Commission field hearing in Palo Alto, CA.
"We're seeing a growth in AI systems that can function across multiple domains for the last decade ... this can lead to unanticipated and harmful outcomes," said Rep. Anna Eshoo (CA-18), kicking off the hearing with words of caution. "Policymakers, researchers, and leaders in the private sector need to collaborate to address these issues to ensure that AI advancement accrues to the benefit of society, not at the cost of it."
She added, "As AI becomes more powerful, we have to keep refocusing technological development on our values to ensure that technology improves society." Many experts testifying throughout the hearing echoed similar points, advocating for widening the shared prosperity that would result from AI and cautioning the Commission on AI's potential harm to workers and marginalized communities.
Rep. Anna Eshoo (CA-18) and Rep. Ro Khanna (CA-17) provided remarks at the U.S. Chamber AI Commission field hearing in Palo Alto, CA, on May 9, 2022.
Erik Brynjolfsson, Senior Fellow at the Stanford Institute for Human-Centered AI (HAI) and Director of the Stanford Digital Economy Lab, articulated the difference between automation and augmentation when it comes to jobs. "Economists have made a distinction between economic substitute and economic complement," he testified. "Substitutes tend to worsen economic inequality and increase concentration of economic and political power."
Moreover, he stressed that "most of the progress over time has come not from automating things we are already doing, but from doing new things ... When technology complements humans ... it increases wages and leads to more widely shared prosperity."
Katya Klinova, Head of AI, Labor and the Economy at The Partnership on AI, also advocated for the path of AI "augmenting and complementing the skills of a much broader group of workers, making them more valuable for the labor market, boosting their wages, improving economic inclusion, and ultimately creating a more competitive economy," she said.
"The regular discourse is overwhelmingly focused on how workers should prepare for the age of AI, and how governments and institutions can help them to prepare," Klinova testified. "By putting all the burden of adjustment on the workers and the government, we are forgetting that the technology too can and should adjust to the needs and realities faced by communities and the workforce."
However, "the issue is that in practice, it is often quite difficult to tell apart worker-augmenting technologies from worker-replacing technologies." Because of that, Klinova asserted, "any company today that wants to claim their technology augments workers can just do it. It's a free-for-all claim that is not necessarily substantiated by anything."
Alka Roy, Founder of the Responsible Innovation Project and RI Labs, underscored a trust gap that results from this kind of discrepancy between having best practices, audits, and governance, and how and where they are actually used. "Some reports ... cite that even among companies that have AI principles and ethics, only 9% to 20% of them publicly admit to having operationalized these principles," Roy said.
To address these issues, Klinova advocated for "invest[ing] in alternative benchmarks ... and in building institutions that allow for empowered participation of workers in the development and deployment of AI," adding that "workers are ultimately the best people to tell apart which technologies help them and make their day better, and which ones look good on paper and in marketing materials, but in practice enable exploitation or over-surveillance."
In talking about the impact of AI on workers, Doug Bloch, Political Director at Teamsters Joint Council 7, referenced his time serving on Governor Newsom's Future of Work Commission: "I became convinced that all the talk of the robot apocalypse and robots coming to take workers' jobs was a lot of hyperbole. I think the bigger threat to the workers I represent is that the robots will come and supervise through algorithms and artificial intelligence."
"We have to empower workers to not only question the role of technology in the workplace, but also to use tools such as collective bargaining and government regulation to make sure that workers also benefit from its deployment," he said.
In his testimony, Bloch emphasized that workers aren't afraid of technology, but they will question its purpose, make sure that it's regulated, and make sure that workers have a voice in the process. "The biggest question for organized labor and worker advocates right now ... is how does all of this technology relate to production standards, to production, and to discipline?"
Bloch referenced an existing contract to show how AI and labor may coexist. Its terms provide a safety net for workers by ensuring that they can't be fired by surveillance technology or an algorithm; a supervisor has to directly observe dishonest behavior to justify a firing. He also underlined the importance of ensuring that the data workers generate, which helps to inform decisions and increase profits for the company, won't be used against them.
Bloch closed by stating, "If the fight of the last century was for workers to have unions and protections like OSHA, I honestly believe that the fight of this century for workers will be around data, and that workers should have a say in what happens with it and share in the profit from it."
Jacob Snow, Staff Attorney for the Technology and Civil Liberties Program at the ACLU of Northern California, told the Commission that the critical discussions on AI are "not narrow technical questions about how to design a product. They are social questions about what happens when a product is deployed to a society, and the consequences of that deployment on people's lives."
He explained why he believed facial recognition should be on the other side of the technological red line: "There are applications of facial recognition which, I think, at least superficially seem like they might be valuable: finding a missing person or tracking down a dangerous criminal, for example. But ... any tool that can find a missing person can find a political dissident. Any tool that can pick a criminal out of a crowd can do the same for an undocumented person or a person who has received reproductive healthcare." He cautioned, "We're living in a time when it's not necessary for civil rights and privacy advocates to say 'just imagine if the technology fell into the wrong hands.' It's going directly into the wrong hands after it's been built."
"We can think a little bit more broadly about what constitutes AI regulation: worker protections, housing support, privacy laws. All those frameworks put in place deeper social, health-related, and economic protections that limit the harm of algorithms," Snow testified.
Rep. Ro Khanna (CA-17), who provided concluding remarks, talked about the disparate impacts that AI will have on different communities across the United States. "This challenge is the central challenge for the country: How do we both create economic opportunity in places that have been totally left out, how do we build and revitalize a new middle class, and how do we have the benefits of technology be more widely shared?" In summary, the congressman stated, "There's going to be 25 million of these new jobs in every field from manufacturing to farming to retail to entertainment. The question is, how do we make sure that they are a possibility for people in every community?"
To continue exploring critical issues around AI, the U.S. Chamber AI Commission will host further field hearings in the U.S. and abroad to hear from experts on a range of topics. The next hearing will be held in London, UK, on June 13. Previous hearings took place in Austin, TX, and Cleveland, OH.
Learn more about the AI Commission here.
Director, Policy, U.S. Chamber of Commerce Technology Engagement Center (C_TEC)
How AI is improving the web for the visually impaired – VentureBeat
There are almost 350 million people worldwide with blindness or some other form of visual impairment who need to use the internet and mobile apps just like anyone else. Yet, they can only do so if websites and mobile apps are built with accessibility in mind and not as an afterthought.
Consider these two sample buttons that you might find on a web page or mobile app. Each has a simple background, so they seem similar.
In fact, they're a world apart when it comes to accessibility.
It's a question of contrast. The text on the light blue button has low contrast, so for someone with a visual impairment like color blindness or Stargardt disease, the word "Hello" could be completely invisible. It turns out that there is a standard mathematical formula that defines the proper relationship between the color of text and its background. Good designers know about this and use online calculators to compute those ratios for any element in a design.
So far, so good. But when it comes to text on a complex background like an image or a gradient, things start to get complicated, and helpful tools are rare. Until now, accessibility testers have had to check these cases manually by sampling the background of the text at certain points and calculating the contrast ratio for each of the samples. Besides being laborious, the measurement is also inherently subjective, since different testers might sample different points inside the same area and come up with different measurements. This problem (laborious, subjective measurements) has been holding back digital accessibility efforts for years.
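For the simple-background case, the "standard mathematical formula" in question is the WCAG contrast-ratio calculation: convert each color to its relative luminance, then take the ratio of the lighter to the darker luminance, with WCAG requiring at least 4.5:1 for normal-size text. A minimal Python sketch (function names are ours, not from any particular tool):

```python
def relative_luminance(rgb):
    """WCAG relative luminance of an sRGB color given as 0-255 integers."""
    def linearize(c):
        c = c / 255
        # Undo sRGB gamma encoding per the WCAG definition
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colors; >= 4.5 passes AA for normal text."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)),
                             reverse=True)
    return (lighter + 0.05) / (darker + 0.05)
```

Black text on a white background yields the maximum ratio of 21:1, while any color against itself yields 1:1, which is why low-contrast pairings like light text on a light blue button can fall below the 4.5:1 floor.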
Artificial intelligence algorithms, it turns out, can be trained to solve problems like this and even to improve automatically as they are exposed to more data.
For example, AI can be trained to do text summarization, which is helpful for users with cognitive impairments; to do image and facial recognition, which helps those with visual impairments; or to do real-time captioning, which helps those with hearing impairments. Apple's VoiceOver integration on the iPhone, whose main usage is to pronounce email or text messages, also uses AI to describe app icons and report battery levels.
Wise companies are rushing to comply with the Americans with Disabilities Act (ADA) and give everyone equal access to technology. In our experience, the right technology tools can make that much easier, even for today's modern websites with their thousands of components. For example, a site's design can be scanned and analyzed via machine learning. It can then improve its accessibility through facial and speech recognition, keyboard navigation, audio translation of descriptions and even dynamic readjustment of image elements.
In our work, we've found three guiding principles that, I believe, are critical for digital accessibility. I'll illustrate them here with reference to how our team, in an effort led by our data science team leader Asya Frumkin, has solved the problem of text on complex backgrounds.
If we look at the text in the image below, we see that there is some kind of legibility problem, but it's hard to quantify by looking solely at the whole phrase. On the other hand, if our algorithm examines each of the letters in the phrase separately, for example, the "e" on the left and the "o" on the right, we can more easily tell for each of them whether it is legible or not.
If our algorithm continues to go through all the characters in the text in this way, we can count the number of legible characters and the total number of characters. In our case, there are four legible characters out of eight in total. The resulting fraction, with the number of legible characters as the numerator, gives us a legibility ratio for the overall text. We can then use an agreed-upon preset threshold, for example 0.6, below which the text is considered unreadable. But the point is we got there by running operations on each piece of the text and then tallying from there.
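The tallying step is simple enough to sketch. Assuming a per-character binary classifier already exists upstream (hypothetical here; the article does not publish one), the aggregation reduces to a ratio and a threshold check:

```python
def legibility_verdict(char_legibility, threshold=0.6):
    """Aggregate per-character legibility flags into an overall verdict.

    char_legibility: list of booleans, one per character, as produced by a
    per-character binary classifier (assumed to exist upstream).
    Returns (ratio, is_legible).
    """
    if not char_legibility:
        return 0.0, False
    # Booleans sum as 0/1, giving the count of legible characters
    ratio = sum(char_legibility) / len(char_legibility)
    return ratio, ratio >= threshold
```

With four legible characters out of eight, as in the article's example, the ratio is 0.5, which falls below the 0.6 threshold, so the text is flagged as unreadable.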
We all remember Optical Character Recognition (OCR) from the 1970s and '80s. Those tools had promise but ended up being too complex for their originally intended purpose.
But one part of that line of work, the CRAFT (Character-Region Awareness For Text) model, held out promise for AI and accessibility. CRAFT maps each pixel in an image to its probability of being at the center of a letter. Based on this calculation, it is possible to produce a heat map in which high-probability areas are painted red and low-probability areas are painted blue. From this heat map, you can calculate the bounding boxes of the characters and cut them out of the image. Using this tool, we can extract individual characters from long text and run a binary classification model (as in #1 above) on each of them.
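The heat-map-to-bounding-boxes step can be illustrated without any deep learning machinery. Assuming we already have a 2D grid of character-center probabilities (a CRAFT-style region score, which the real model produces with a neural network), thresholding the grid and taking the extent of each connected component yields one candidate box per character. This is a simplified sketch; CRAFT's actual post-processing is more involved:

```python
def character_boxes(heatmap, threshold=0.5):
    """Return bounding boxes (top, left, bottom, right) of connected
    high-probability regions in a 2D list of character-center probabilities.
    Each connected component is treated as one character candidate."""
    h, w = len(heatmap), len(heatmap[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if heatmap[y][x] >= threshold and not seen[y][x]:
                # Flood-fill this connected component, tracking its extent
                stack = [(y, x)]
                seen[y][x] = True
                top, left, bottom, right = y, x, y, x
                while stack:
                    cy, cx = stack.pop()
                    top, bottom = min(top, cy), max(bottom, cy)
                    left, right = min(left, cx), max(right, cx)
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w and not seen[ny][nx]
                                and heatmap[ny][nx] >= threshold):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                boxes.append((top, left, bottom, right))
    return boxes
```

Each returned box can then be cropped out of the original image and fed to the per-character legibility classifier described earlier.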
In principle, the model classifies individual characters in a straightforward binary way. In practice, there will always be challenging real-world examples that are difficult to quantify. What complicates the matter even more is the fact that every person, whether visually impaired or not, has a different perception of what is legible.
Here, one solution (and the one we have taken) is to enrich the dataset by adding objective tags to each element. For example, each image can be stamped with a reference piece of text on a fixed background prior to analysis. That way, when the algorithm runs, it will have an objective basis for comparison.
As the world continues to evolve, every website and mobile application needs to be built with accessibility in mind from the beginning. AI for accessibility is a technological capability, an opportunity to get off the sidelines and engage, and a chance to build a world where people's difficulties are understood and considered. In our view, the solution to inaccessible technology is simply better technology. That way, making websites and apps accessible is part and parcel of making websites and apps that work, but this time, for everybody.
Navin Thadani is cofounder and CEO of Evinced.
Welcome to the VentureBeat community!
DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.
If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.
You might even consider contributing an article of your own!
Read More From DataDecisionMakers
Can Administrators Ensure the Ethical Use of AI in K12 Education? – EdTech Magazine: Focus on K-12
As with many technologies used in K12 learning environments, school leaders must guarantee that artificial intelligence is safe. In addition to the legal requirements placed on districts, there are ethical issues schools must consider before introducing new tech powered by AI and machine learning (ML).
To do this, there must first be an understanding of what AI looks like in education. "All of us have very different ideas for AI and algorithmic tools and machine learning and what these things are," says sava saheli singh, an eQuality-Scotiabank postdoctoral fellow in AI and surveillance at the University of Ottawa.
The concepts of AI and ML are nebulous, meaning that everyone understands them a bit differently. The internet, and companies selling AI-powered products, can sway people's understanding of this technology.
For school leaders to properly protect students using this tech, they must understand what AI and ML mean in the context of the products they're adopting. IT administrators should also understand how the AI is using data and how the tools affect different student populations.
The first consideration leaders must make is whether they understand the tools in which they're investing. IT departments should know what the tools are doing behind the scenes to produce the results they see.
"There's definitely a lack of understanding about what these systems are, how they're implemented, who they're for and what they're used for," singh says.
In understanding how AI works, IT leaders must remember that these systems rely on data.
"AI only works if you grab data. It only works if you grab data from everywhere you can find it, and the more data the better," says Valerie Steeves, a professor in the department of criminology at the University of Ottawa and principal investigator of the eQuality project.
Because AI and ML tools rely on data, admins must ensure they're building in student data protections to use this technology ethically. "AI that's rolling out now always comes with a price tag, and the price tag is your students' data," Steeves says.
Another ethical consideration is algorithmic bias in AI and ML technologies. When purchasing these solutions, it's important to remember that these programs operate on data sets that frequently contain bias.
"There's a lot of bias involved in applying some of these tools," singh says. "A lot of these tools are made by specific people and with specific populations in mind. At a basic level, there's racial, gender, sexual orientation differences ... there's a lot of different kinds of people, and a lot of these technologies either leave them out or include them in ways that are really harmful."
While the creators of these tools typically aren't intending to cause harm, the biases built into the data sets can discriminate against certain student populations.
"It creates a system where biases can play out in ways that are rampant, and it becomes ever more difficult to pull them back because the bias and the discriminatory use of the data is built into the algorithm," Steeves says.
One way to account for this bias is to acknowledge it. Teaching students to interact with AI should include lessons about how it was created and where these biases may appear.
The necessary ethical considerations shouldn't dissuade school leaders from using AI in education entirely. Teaching students to interact with this technology will set them up for success in higher education and future careers. There are also safe ways to use AI.
"A lot of the context in which algorithms and AI and machine learning are useful is when you're looking at a large corpus of data and trying to make sense of it," singh says. "Using algorithms and AI to answer a specific question can maybe give you a clue as to what the larger context might be. In that educational context, I think it's a useful tool."
When students are inputting data, instead of being the subjects from whom data is extracted, AI can be extremely beneficial.
"AI is most useful when no data is collected from the kids, and the kids are not embedded into some kind of surveillance system," Steeves says. "AI is just helping them facilitate their learning and their modeling of the world around them."
Everlaw Offers Half-Day Symposium on AI-Driven Ediscovery in Chicago, NYC and LA – PR Newswire
Two CLE-Eligible Sessions Offered at In-Person Events from June 1 to 15; Designed to Deliver Tips and Insights for Ediscovery Success in the New AI World
OAKLAND, Calif., May 20, 2022 /PRNewswire/ -- Everlaw, the cloud-native investigation and litigation platform, today announced the kickoff of its Connect events, an engaging, in-person symposium for legal professionals hosted jointly with EDRM. The half-day events pack in networking opportunities, CLE-eligible panels, great food and cocktails, sneak peeks of the latest in industry-leading artificial intelligence (AI) solutions for ediscovery, and a special deep dive on trends in technology assisted review (TAR) with AI and ediscovery expert Dr. Maura R. Grossman.
Register here for Everlaw Connect.
"Today's legal professions who understand and wield the power of machine learning and AI for ediscovery are already reaping a competitive advantage in their cases and careers," said Chuck Kellner, ediscovery pioneer and Everlaw strategy leader. "Fluency in emerging tech is now a must-have for client and professional success. Our Connect events are designed to deliver a quick leg up on the rapidly evolving world of AI discovery with tech demos, tools and advice from both peers and world-leading experts. Bring your growth mindset and curiosity, and you'll be richly rewarded."
The Connect events half-day sessions run from noon to 6:30 pm local times. They are held at:
Everlaw and EDRM designed this symposium for legal professionals who want to return to valuable in-person peer networking and educational opportunities, with great food, drink and company.
Everlaw will host CLE-eligible panels on:
Special featured speakers include:
Day 2: On June 2 in Chicago, June 9 in NYC and June 16 in LA, the Everlaw User Education team welcomes current customers for an additional half-day of interactive training experience. The experience will focus on how users can save time and money by providing guidance on how to incorporate AI into their review process and how to be more efficient throughout the life of a case on Everlaw. Users will have the chance to work together to share insight into how they have tackled common problems and will receive guidance from the experts from Everlaw's User Education team on how to deliver the best results using the Everlaw platform.
Register here for Customer Education
About Everlaw
Everlaw blends cutting-edge technology with modern design to help government entities, law firms and corporations solve the toughest problems in the legal industry. Everlaw is used by Fortune 100 corporate counsels and household brands like Hilton and Dick's Sporting Goods, 91 of the AM Law 200 and all 50 U.S. state attorneys general. Based in Oakland, California, Everlaw is funded by top-tier investors, including Andreessen Horowitz, CapitalG, H.I.G. Growth Partners, K9 Ventures, Menlo Ventures, and TPG Growth.
Learn more at https://www.everlaw.com.
Media Contact: Colleen Haikes, [emailprotected]
SOURCE Everlaw
Link:
Everlaw Offers Half-Day Symposium on AI-Driven Ediscovery in Chicago, NYC and LA - PR Newswire
Cathie Wood on AI, crypto, inflation and investment strategy – St Pete Catalyst
Posted: at 6:48 pm
Cathie Wood commands a large and loyal following in the investment and tech world due to her unabashed belief in disruptive innovation and unique insights into the financial markets.
Wood's distinctive views were on full display during Thursday's poweredUp Tampa Bay Tech Festival at the Mahaffey Theater, when the ARK Invest CEO sat down with Joe Hamilton, publisher of the St. Pete Catalyst and head of network for Metacity, for an enlightening fireside chat.
Before Wood offered her thoughts on innovative technology, inflation and cryptocurrency, Hamilton asked her to relay a story he never gets tired of hearing: why she relocated her investment firm from New York to St. Pete.
"After we did look at a lot of both no-tax and low-tax states, we were drawn to Florida, specifically the Tampa Bay region, because of its vibrancy and because it did seem to us like it is going to be the next Austin," said Wood. "And I do mean the Tampa Bay region, because I don't think St. Pete can do it all by itself."
Artificial Intelligence
Wood called breakthroughs in artificial intelligence "mind-blowing." The investment icon said advancements in the technology are happening faster than she expected, despite ARK pushing the envelope on AI's potential.
Wood said artificial intelligence training costs have dropped 60%, combining both hardware and software. She then dropped her first bombshell idea, offering her latest take on the technology that she said was "hot off the research press."
"This isn't even in our Big Ideas 2022," said Wood. "We believe we are six to 13 years away from the Holy Grail: artificial general intelligence. At which point we will see an inflection point in productivity, and it's going to be pretty mind-blowing."
An artificial general intelligence (AGI) is a machine capable of understanding the world like any human and with the same capacity to complete a wide range of tasks.
Cryptocurrency
Hamilton asked Wood for her thoughts on last week's $40 billion collapse of the popular crypto token Luna and its associated terraUSD stablecoin (UST). Terra was the first major attempt at an algorithmic stablecoin meant to retain a $1 value at all times.
Wood replied that she participated in a podcast with the founder of Terra about 18 months ago, and after relistening to the conversation, she realized it was a Ponzi scheme.
Wood noted that the number of Luna tokens in circulation went from around 300 million to 6.5 trillion in just six weeks.
"They adopted the U.S. (monetary) policy on that," said Hamilton, eliciting laughs from Wood and the audience. "Well, you make a good point on that," she replied.
Still, Wood remains extremely bullish on Bitcoin. She called the apex token, designed to stop minting once 21 million units are in circulation, a good store of value. She said the cryptocurrency uniquely and positively impacts emerging markets and provides an insurance policy against the confiscation of wealth in several ways, with inflation the most pervasive.
"So, we're out there saying, and you can find this in our Big Ideas 2022 on our website, a $1 million target," said Wood. "It's actually $1.3 (million). So, this is a money revolution, and we do think it is going to spread through the world."
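Bitcoin's 21 million cap, mentioned above, follows from its issuance schedule: the block subsidy began at 50 BTC and halves every 210,000 blocks, so total supply is a geometric series. A simplified sketch (the real protocol counts integer satoshis with rounding, so the true limit is slightly below 21 million):

```python
# Bitcoin issuance: the block subsidy starts at 50 BTC and halves every
# 210,000 blocks. Summing the resulting geometric series shows why total
# supply approaches, but never reaches, 21 million coins.
BLOCKS_PER_HALVING = 210_000
INITIAL_SUBSIDY = 50.0  # BTC per block at launch

total = sum(BLOCKS_PER_HALVING * INITIAL_SUBSIDY / 2**i for i in range(33))
print(f"{total:,.2f}")  # just under 21,000,000
```

The cap is therefore not a hard-coded constant but an emergent limit of the halving schedule, which is why new supply tapers gradually rather than stopping abruptly.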
Inflation
Hamilton noted that Wood has previously stated that innovation is one of the strongest hedges for inflation due to its cost-reducing effects. Wood called the last year the most difficult time of her career in terms of inflation and deflation, even worse than the Great Recession of 2008-09.
She blamed the mainstream narrative of inflation and interest rates for increasing the discount factors used to present the value of future cash flows. As interest rates rise, the present value of future cash flows drops, which Wood said disproportionately affected her investment strategy.
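The discount-rate effect described above reduces to the present-value formula PV = CF / (1 + r)^t: the further out a cash flow sits, the more a rate rise erodes its present value, which is why long-duration innovation stocks are hit hardest. The rates and amounts below are illustrative, not ARK's figures:

```python
# Present value of a single future cash flow: PV = CF / (1 + r)**t.
# A rise in the discount rate r hurts distant cash flows far more than
# near-term ones. Illustrative numbers only.
def present_value(cash_flow: float, rate: float, years: int) -> float:
    return cash_flow / (1 + rate) ** years

cf = 100.0  # a $100 cash flow
for years in (1, 10):
    pv_low = present_value(cf, 0.02, years)   # 2% discount rate
    pv_high = present_value(cf, 0.05, years)  # 5% discount rate
    drop = 1 - pv_high / pv_low
    print(f"{years:>2} yr out: {pv_low:.2f} -> {pv_high:.2f} ({drop:.0%} drop)")
```

A one-year cash flow loses only a few percent of its present value when rates rise from 2% to 5%, while a ten-year cash flow loses roughly a quarter, matching Wood's point about the disproportionate impact on her strategy.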
"We have been fighting this notion that inflation has now reached a point where it's embedded in the system and is going to move into wages," said Wood. "I know the wage rates here, we were talking backstage, have skyrocketed, but I think this is a special situation."
Wood said she is fighting the narrative that America is in a situation similar to the 1970s, where inflation embeds in the system, and people should get used to it and operate around it. She said the market holds significant deflationary forces, which she believes is the greater risk.
"The risk is you stop innovating and lowering costs," said Wood.
Investment strategy
Wood said that while many people expected an apocalyptic scenario for her company over the last year, with nothing but outflows from her investment funds, ARK netted $17 billion in 2021 and continues to receive strong inflows. She explained that her young team of analysts remains focused on a five-year outlook and recently concentrated ARKs portfolios on its highest conviction names.
Wood said her flagship strategy, the ARK Innovation ETF (ARKK), went from holding 58 companies to 35. Contrary to most investment strategies, she explained that concentration is a risk-control measure for ARK. She said that many of the stocks ARK dropped lost visionary managers willing to stand up to short-term-oriented shareholders.
"It's so interesting to watch our portfolios almost become negatively correlated to the stock market," said Wood. "So, that's been the story of the past few years."
Hamilton noted the five-year outlook mimics the mindset of a venture capitalist, and Wood said she started her company by stating that ARK is "the closest you'll find to a venture capital company in the public equity markets."
Wood added that ARK offers all its research to the public, feeding her clients with transparency not found anywhere else. She said the investment firm would continue to publish its models because the traditional financial world underserves innovation investors.
Wood said she becomes concerned when a traditional company buys an innovator, stating that NewsCorp bought the once-popular social media platform MySpace and killed it with advertising.
Wood encourages the companies she invests in to spend aggressively now and capitalize on massive growth opportunities. She believes that ARK's focus, truly disruptive innovation, is valued at roughly $7-8 trillion, less than 10% of the global equity markets.
"By 2030, we believe that $7-8 trillion will be $210 trillion and that it will dominate the benchmarks out there," said Wood. "In other words, there's going to be a lot of disruption."
"Tesla, we were beating the drum on that, and nobody believed us. Nobody believed us, and boom, they scaled."
See the original post:
Cathie Wood on AI, crypto, inflation and investment strategy - St Pete Catalyst
Why AI and autonomous response are crucial for cybersecurity (VB On-Demand) – VentureBeat
Presented by Darktrace
Today, cybersecurity is in a state of continuous growth and improvement. In this on-demand webinar, learn how two organizations use a continuous AI feedback loop to identify vulnerabilities, harden defenses and improve the outcomes of their cybersecurity programs.
Watch free on-demand here.
The security risk landscape is in tremendous flux, and the traditional on-premises approach to cybersecurity is no longer enough. Remote work has become the norm, and outside the office walls, employees are letting down their personal security defenses. Cyber risks introduced by the supply chain via third parties are still a major vulnerability, so organizations need to think about not only their defenses but those of their suppliers to protect their priority assets and information from infiltration and exploitation.
And that's not all. The ongoing Russia-Ukraine conflict has provided more opportunities for attackers, and social engineering attacks have ramped up tenfold and become increasingly sophisticated and targeted. Both play into the fears and uncertainties of the general population. Many security industry experts have warned about future threat actors leveraging AI to launch cyber-attacks, using intelligence to optimize routes and hasten their attacks throughout an organization's digital infrastructure.
"In the modern security climate, organizations must accept that it is highly likely that attackers could breach their perimeter defenses," says Steve Lorimer, group privacy and information security officer at Hexagon. "Organizations must focus on improving their security posture and preventing business disruption, so-called cyber resilience. You don't have to win every battle, but you must win the important ones."
ISOs need to look for cybersecurity options that alleviate some resource challenges, add value to their team, and reduce response time. Self-learning AI trains itself using unlabeled data. Autonomous response is a technology that calculates the best action to take to contain in-progress attacks at machine speed, preventing attacks from spreading throughout the business and interrupting crucial operations. And both are becoming essential for a security program to address these challenges.
Attackers are constantly innovating, transforming old attack patterns into new ones. Self-learning AI can detect when something in an organization's digital infrastructure changes, identify behaviors or patterns that haven't been seen previously, and act to quarantine the potential threat before it can escalate into a full-blown crisis, disrupting business.
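Vendors' self-learning models are proprietary, but the core idea of learning a per-entity baseline and flagging sharp deviations can be sketched with a toy z-score detector. This is illustrative only, not Darktrace's actual method:

```python
# Toy novelty detector: flag a host whose behavior deviates sharply from
# its own learned baseline. An illustrative z-score sketch of the
# "learn normal, flag abnormal" idea; real self-learning products use far
# richer, proprietary models.
from statistics import mean, stdev

def is_novel(baseline: list[float], observed: float, threshold: float = 3.0) -> bool:
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Outbound connections per minute this host historically makes:
history = [12, 15, 11, 14, 13, 12, 16, 14]
print(is_novel(history, 15))   # within normal range
print(is_novel(history, 250))  # sudden spike: quarantine candidate
```

The point of the sketch is that no attack signature is needed; anything far outside the entity's own learned behavior is treated as worth containing, which is what lets such systems react to novel threats.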
"It's about building layers at the end of the day," Lorimer adds. "AI will always be a supporting element, not a replacement for human teams and knowledge. AI can empower human teams and decrease the burden. But we can never entirely rely on machines; you need the human element to make gut-feeling decisions and emotional reactions to influence more significant business decisions."
Often, cyber attacks start slowly; many take months to move between reconnaissance and penetration, but the most important components of an attack happen very quickly. Autonomous response unlocks the ability to react at machine speed to identify and contain threats in that short window.
The second key advantage of autonomous response is that it enables always-on defense. Even with the best intentions in the world, security teams will always be constrained by resources. There arent enough people to defend everything all the time. Organizations need a layer that can augment the human team, providing them time to think and respond with crucial human context, like business and strategy acumen. Autonomous response capabilities allow the AI to make decisions instantaneously. These micro-decisions give human teams enough time to make those macro-decisions.
"Once an organization has matured its thinking to the point of assumed breach, the next question is understanding how attackers traverse the network," Lorimer says. Now, AI can help businesses better understand their own systems and identify the most high-risk paths an attacker might take to reach their "crown jewels," or most important information and assets.
"This attack simulation allows them to harden defenses around their most vulnerable areas," Lorimer says. "And self-learning AI is really all about a paradigm shift: instead of building up defenses based on historical attack data, you need to be able to defend against novel threats."
Attack path modeling (APM) is a revolutionary technology because it allows organizations to map the paths where security teams may not have as much visibility or may not have originally thought of as vulnerable. The network is never static; a large, modern, and innovative enterprise constantly changes. So, APM can run continuously and alert teams of new attack paths created via new integrations with a third party or a new device joining the digital infrastructure.
"This continuous, AI-based approach allows organizations to harden their defenses continually, rather than relying on biannual, or even more infrequent, red-teaming exercises," Lorimer says. "APM enables organizations to remediate vulnerabilities in the network proactively."
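Attack path modeling can be illustrated with a minimal graph sketch: treat assets as nodes, possible lateral movements as directed edges, and enumerate routes from an internet-facing entry point to a critical asset. All node names below are hypothetical; real APM products weight edges by exploitability and re-run continuously as the network changes:

```python
# Minimal attack-path sketch: model the network as a directed graph and
# enumerate simple (cycle-free) paths from an entry point to a critical
# asset. Node names are hypothetical examples.
from collections import defaultdict

edges = [
    ("internet", "vpn-gateway"),
    ("internet", "web-server"),
    ("web-server", "app-server"),
    ("vpn-gateway", "app-server"),
    ("app-server", "crown-jewel-db"),
    ("contractor-laptop", "vpn-gateway"),
]

graph = defaultdict(list)
for src, dst in edges:
    graph[src].append(dst)

def attack_paths(start, target, path=()):
    path = path + (start,)
    if start == target:
        yield path
        return
    for nxt in graph[start]:
        if nxt not in path:  # skip nodes already on this path (no cycles)
            yield from attack_paths(nxt, target, path)

for p in attack_paths("internet", "crown-jewel-db"):
    print(" -> ".join(p))
```

Even this toy version shows the value of the approach: adding a single edge (say, a new third-party integration) can open a route to the critical asset that no one designed for, which is exactly what continuous APM is meant to surface.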
When choosing a cybersecurity solution, there are a few things ISOs need to look for, Lorimer says. First, the solution should augment the human teams without creating substantial additional work. The technologies should be able to increase the value that an organization delivers.
ISOs should also look to repair any significant overlaps or gaps in technology in their existing security stacks. Todays solutions can replace much of the existing stack with better, faster, more optimized, more automated and technology-led approaches.
Beyond the technology itself, ISOs must seek out a vendor that adds human expertise and contextual analysis on top.
"For example, Darktrace's Security Operations Center (SOC) and Ask the Expert services allow our team at Hexagon to glean insights from their global fleet, partner community, and entire customer base," Lorimer says. "Darktrace works with companies across all different industries and geographies, and that context allows us to understand threats and trends that may not have immediately impacted us yet."
Hexagon operates in two key industry sectors, manufacturing and software engineering, so each facet of the business faces different, specific threats from different threat actors. Darktrace's SOC offers insights from broader industry experts and analysts based on their wealth of knowledge.
But even with the best tools, you cant solve every problem. You need to focus on solving the issues that will genuinely affect your ability to deliver to your customers and, thus, your bottom line. You should establish controls that can help manage and reduce that risk.
"It's all about getting in front of issues before they can escalate and mapping out potential consequences," Lorimer says. "It all comes down to understanding risk for your organization."
For more insight into the current threat landscape and to learn more about how AI can transform your cybersecurity program, don't miss this VB On-Demand event!
Watch free on-demand here.
You'll learn about:
Presenters:
Visit link:
Why AI and autonomous response are crucial for cybersecurity (VB On-Demand) - VentureBeat
Elementary Named to the 2022 CB Insights AI 100 List of Most Innovative Artificial Intelligence Startups – PR Newswire
Elementary recognized for achievements in machine vision and industrial quality inspections
NEW YORK, May 19, 2022 /PRNewswire/ -- CB Insights today named Elementary to its annual AI 100 ranking, showcasing the 100 most promising private artificial intelligence companies in the world.
"This is the sixth year that CB Insights has recognized the most promising private artificial intelligence companies with the AI 100. This year's cohort spans 13 industries, working on everything from recycling plastic waste to improving hearing aids," said Brian Lee, Senior Vice President of CB Insights' Intelligence Unit. "Last year's AI 100 companies had a remarkable run, raising more than $6 billion, including 20 mega-rounds worth more than $100 million each. We're excited to watch the companies on this year's list continue to grow and create products and services that meaningfully impact the world around them."
"Manufacturing and supply chain are being forced through the largest transformation we've seen in decades. The global supply chain shock, coupled with increased demand and a difficult labor market, make it imperative that manufacturers find autonomous solutions to automate processes, improve digital intelligence, and increase yield and volume," said Arye Barnehama, Chief Executive Officer and founder of Elementary. "At Elementary, we champion closed-loop quality. Our platform uses edge machine learning to inspect goods and protect production lines from defects. Using cloud technology, inspection data is analyzed for defects and root causes. These AI-driven, real-time insights are then pushed to the factory floor, closing the loop and avoiding defects through operational improvements."
Utilizing the CB Insights platform, the research team picked 100 private market vendors from a pool of over 7,000 companies, including applicants and nominees. They were chosen based on factors including R&D activity, proprietary Mosaic scores, market potential, business relationships, investor profile, news sentiment analysis, competitive landscape, team strength, and tech novelty. The research team also reviewed thousands of Analyst Briefings submitted by applicants.
Quick facts about the 2022 AI 100:
About Elementary
Elementary delivers an easily scalable, flexible, securely connected machine vision platform that leverages the power of machine learning to open new use cases, provide insights, and close the loop on the manufacturing process. With Elementary Quality as a Service (QaaS), we deploy the inspection hardware, train the machine learning models, integrate with your automation equipment, and provide data analytics. From cameras, lighting and mounting to software and support, we are the single-source product experts, providing everything you need to increase detections, reduce defects and improve productivity. For more information, please visit: https://www.elementaryml.com/.
SOURCE Elementary
Analysts warn growing AI revolution in farming is not without huge risks – Food Ingredients First
20 May 2022 --- While artificial intelligence (AI) is on the cusp of driving what some refer to as the next agricultural revolution, researchers are warning that using some of these new technologies at scale holds huge risks that are not being considered.
Even so, many industry watchers believe these systems are pivotal in confronting the global challenge of feeding our ballooning population more sustainably.
A new risk analysis, published in the journal Nature Machine Intelligence, warns that the future use of artificial intelligence in agriculture comes with substantial potential risks for farms, farmers and food security that are poorly understood and under-appreciated.
"The idea of intelligent machines running farms is not science fiction. Large companies are already pioneering the next generation of autonomous ag-bots and decision support systems that will replace humans in the field," highlights Dr. Asaf Tzachor of the University of Cambridge's Centre for the Study of Existential Risk, first author of the paper.
"But so far no one seems to have asked the question: are there any risks associated with a rapid deployment of agricultural AI?"
Large-scale intelligent automated systems run the risk of being susceptible to hacking, the researchers warn.
Risk of hacking a food network
The researchers put forward a hypothetical scenario in which the authority for tilling, planting, fertilizing, monitoring and harvesting a field has been delegated to AI.
In this scenario, these algorithms that control drip-irrigation systems, self-driving tractors and combine harvesters are clever enough to respond to the weather and the exact needs of the crop.
These intelligent automated systems would be largely responsible for managing large expanses of crops, grown to feed entire cities' worth of people.
"Then imagine a hacker messing things up," the paper's authors stress.
Despite the huge promise of AI for improving crop management and agricultural productivity, potential risks must be addressed responsibly and new technologies properly tested in experimental settings to ensure they are safe, while securing against accidental failures, unintended consequences and cyber-attacks.
Employing white hats to identify system failures
In their research, the authors have come up with a catalog of risks that must be considered in the responsible development of AI for agriculture and ways to address them.
In this assessment, they raise the alarm about cyber-attackers potentially causing disruption to commercial farms using AI, by poisoning datasets or by shutting down sprayers, autonomous drones and robotic harvesters.
To guard against this they suggest that white hat hackers help companies uncover any security failings during the development phase, so that systems can be safeguarded against real hackers.
In a scenario associated with accidental failure, the authors suggest that an AI system programmed only to deliver the best crop yield in the short term might ignore the environmental consequences of achieving this, leading to overuse of fertilizers and soil erosion in the long-term.
Meanwhile, the over-application of pesticides in pursuit of high yields could poison ecosystems, while over-application of nitrogen fertilizer would pollute the soil and surrounding waterways.
While AI may help relieve manual labor, it may widen the gaps between commercial and subsistence farmers, the researchers flag.
The authors suggest involving applied ecologists in the technology design process to ensure these scenarios are avoided.
Impact on human labor
Aside from raising farming efficiencies, autonomous AI machine systems can also help improve the working conditions of farmers, relieving them of manual labor.
But without inclusive technology design, socioeconomic inequalities that are currently entrenched in global agriculture, including gender, class and ethnic discrimination, will remain.
"Expert AI farming systems that don't consider the complexities of labor inputs will ignore, and potentially sustain, the exploitation of disadvantaged communities," warns Tzachor.
However, small-scale growers who cultivate the majority of farms worldwide and feed large swaths of the so-called Global South are likely to be excluded from AI-related benefits.
"Marginalization, poor internet penetration rates, and the digital divide might prevent smallholders from using advanced technologies, widening the gaps between commercial and subsistence farmers," warn the researchers.
The significant potential risks of AI-based food systems have been similarly echoed in a previous risk analysis from the University of Cambridges Centre for the Study of Existential Risk.
In other research, scientists at Wageningen University and Research in the Netherlands are also delving into the impact of AI on the agri-food space, following US$2.5 million in funding from the Dutch Research Council put toward research of these outcomes.
Tech-to-table takes farming to next level
Innova Market Insights' third Top Trend for 2022, "Tech to Table," is a nod toward how technological advances such as AI are offering greater possibilities to change every aspect of a product's lifecycle, from conception to consumption.
Earlier this month, Brightseed, creator of Forager, a platform that illuminates the link between specific plant compounds and human health outcomes, netted US$68 million in series B funding led by Temasek.
By Benjamin Ferrer
To contact our editorial team please email us at editorial@cnsmedia.com
If you found this article valuable, you may wish to receive our newsletters. Subscribe now to receive the latest news directly into your inbox.
Read this article:
Analysts warn growing AI revolution in farming is not without huge risks - Food Ingredients First
Australia’s Spending on Artificial Intelligence (AI) to Double to $3.6 Billion by 2025, Says IDC – IDC
AUSTRALIA, Sydney, May 20, 2022 -- According to IDC's latest Worldwide Artificial Intelligence Spending Guide, Australia's spending on AI systems will grow to $3.6 billion in 2025, representing a compound annual growth rate (CAGR) of 24.4% for the period 2020-25. Organisations have increased their investment in artificial intelligence (AI) to keep up with the changing digital environment, which aids in better business decision-making and customer service.
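The quoted figures are easy to sanity-check: a 24.4% CAGR over 2020-25 ending at $3.6 billion implies a 2020 base of about $1.2 billion and a 2022 level of about $1.8 billion, consistent with the headline claim that spending will roughly double by 2025. A quick check:

```python
# Sanity-check IDC's figures: $3.6B in 2025 at a 24.4% CAGR over 2020-25
# implies the 2020 base, and hence the 2022 level the "doubling" refers to.
cagr = 0.244
spend_2025 = 3.6  # USD billions

base_2020 = spend_2025 / (1 + cagr) ** 5
spend_2022 = base_2020 * (1 + cagr) ** 2
print(f"2020: ${base_2020:.2f}B  2022: ${spend_2022:.2f}B  2025: ${spend_2025:.1f}B")
```

The implied 2022 spend of roughly $1.8 billion is about half of $3.6 billion, so the "double by 2025" headline and the 24.4% CAGR are internally consistent.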
"In Australia, we continue to see rapid adoption of artificial intelligence systems across various industries, indicating strong demand from Australian organisations to streamline core business processes and maximise the power of their organisational data," says Anastasia Antonova, Senior Market Analyst, Data and Analytics, IDC Australia. "It is going to be very important for Australian businesses to continue investing in AI tools and platforms to support decision-making, become more digitally resilient and innovative, and be one step ahead of competitors," she added.
With a five-year CAGR of 23.1%, banking will be the leading industry in AI spending. The primary goals are to prevent fraud, identify threats in advance, and improve customer recommendation systems. The federal/central government is the second-highest spender on AI solutions, with a focus on detecting, monitoring, and responding to personnel and infrastructure threats. Professional services and retail are the next highest-spending industries. The primary focus area of professional services is to provide better digital assistance by self-regulating and automating mundane software maintenance activities. Retail industries are focusing on improved customer service, expert shopping advisors, and product recommendations, as well as optimized digital supply chain operations.
"To keep up with the changes in the digital environment, more than half the organisations surveyed in Australia are planning to increase their AI spend in 2022," says Vinayaka Venkatesh, Senior Market Analyst at IDC IT Spending Guides, Customer Insights & Analysis. "Organisations have responded more firmly to the changes caused by the pandemic, increasing their use of AI to improve customer service, business processes, and digital assistance," he added.
The top five use cases currently account for $0.75 billion, or 40% of total AI spending, and are expected to grow to $1.4 billion by 2025. Investments are directed toward augmented customer service agents, augmented threat intelligence and prevention systems, digital assistants, program advisors and recommendation systems, and smart business innovation and automation. These solutions are widely used in the BFSI industry to reduce risk, make better decisions, and provide a better customer service experience. State and federal governments are investing more than 30.5% of total AI spending in augmented threat intelligence and prevention systems to identify emerging personal and infrastructure threats and improve public safety.
The leading technology will be software, accounting for more than 52.2% of AI spending; the largest areas of investment will be AI applications and artificial intelligence platforms, accounting for more than 57.2% of software spending, while the rest will go toward AI system infrastructure software and AI application development & deployment. The second-largest category is services, which will account for 29.5% of total AI spending; IT services will account for more than 73.5% of that, with the remainder going to business services. The rest of the spending goes toward hardware, with an 18.4% share of total AI spending.
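As a quick consistency check, the three technology shares quoted above should cover all AI spending; they sum to 100% within rounding:

```python
# Consistency check on the quoted technology split: the software, services
# and hardware shares of total AI spending should sum to ~100%.
shares = {"software": 52.2, "services": 29.5, "hardware": 18.4}
total_share = sum(shares.values())
print(round(total_share, 1))  # 100.1, i.e. 100% after rounding
```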
The Worldwide Artificial Intelligence Spending Guide sizes spending for technologies that analyze, organise, access, and provide advisory services based on a range of unstructured information. The Spending Guide quantifies the AI opportunity by providing data for 29 use cases across 19 industries in nine regions and 32 countries. Data is also available for the related hardware, software, and services categories.
Taxonomy Note: The IDC Worldwide Artificial Intelligence Spending Guide uses a very precise definition of what constitutes an AI application: the application must have an AI component that is crucial to it; without this AI component, the application will not function. This distinction enables the Spending Guide to focus on those software applications that are strongly AI-centric. In comparison, the IDC Worldwide Semiannual Artificial Intelligence Tracker uses a broad definition of AI applications that includes applications where the AI component is non-centric, or not fundamental, to the application. This enables the inclusion of vendors that have incorporated AI capabilities into their software but whose applications are not used exclusively for AI functions. In other words, the application will function without the AI component.
-Ends-
About IDC Spending Guides
IDC's Spending Guides provide a granular view of key technology markets from a regional, vertical industry, use case, buyer, and technology perspective. The spending guides are delivered via pivot table format or custom query tool, allowing the user to easily extract meaningful information about each market by viewing data trends and relationships.
For more information about IDC's Spending Guides, please contact Vinay Gupta at vgupta@idc.com
Click here to learn about IDC's full suite of data products and how you can leverage them to grow your business.
About IDC
International Data Corporation (IDC) is the premier global provider of market intelligence, advisory services, and events for the information technology, telecommunications, and consumer technology markets. With more than 1,100 analysts worldwide, IDC offers global, regional, and local expertise on technology, IT benchmarking and sourcing, and industry opportunities and trends in over 110 countries. IDC's analysis and insight helps IT professionals, business executives, and the investment community to make fact-based technology decisions and to achieve their key business objectives. Founded in 1964, IDC is a wholly owned subsidiary of International Data Group (IDG), the world's leading tech media, data, and marketing services company. To learn more about IDC, please visit www.idc.com. Follow IDC on Twitter at @IDC and LinkedIn. Subscribe to the IDC Blog for industry news and insights.