Daily Archives: August 8, 2022

The Rise and Fall of a Bitcoin Mining Sensation – WIRED

Posted: August 8, 2022 at 12:36 pm

It was 8:45 on the morning of June 13 when Bill Stewart, the CEO of Maine-based bitcoin mining business Dynamics Mining, received a call from one of his employees. "He's like, 'Every machine inside of our facility in Brunswick [in Cumberland County, Maine] has been taken,'" Stewart says. "That's crazy. I couldn't believe it."

He alerted personnel manning another mining facility, in nearby Lewiston [in Androscoggin County, Maine], and told them to be on their toes. He thought a burglar was at large. Stewart had a theory on who might have taken the machines: in those days he had been wrangling with a customer, Compass Mining, a Delaware company that allowed people to buy mining machines and have them hosted in third-party facilities like Stewart's, due to a dispute over energy bills. Stewart thought Compass had to pay for them; Compass believed their contract said otherwise.

A few days earlier, Dynamics had sent Compass a termination letter demanding payment, and shortly thereafter had switched the company's machines off. Then, Compass Mining staffers had taken their equipment away from Brunswick, and they were about to enter the Lewiston plant to recover more machines. "They're trying to get inside the building," Stewart says. "And I'm telling my brother, who runs our security, 'Do not let them into the building. We're not ripping miners out of the wall. Do not let them inside.'"

In a lawsuit filed against Dynamics in the Delaware Court of Chancery on June 21, Compass Mining alleged that Stewart, having refused to foot the energy bill he was supposed to pay, had been holding this valuable equipment hostage to gain leverage in negotiations. The way Stewart tells it, he simply wanted the removal to happen in an orderly fashion as opposed to hastily and under cover of darkness. What's more, he says, for a while he had considered continuing to host the machines on behalf of Compass customers, cutting out the middleman. "Their customers were reaching out, saying, 'Hey, can we just mine directly with you?'" Stewart says. The reason that couldn't happen, Stewart says, is that Compass had not given its customers the identifying serial numbers of the machines they had bought, and there was no way for Stewart to know who owned what.

On July 5 the court granted Compass's request to get its machines back, but underlined that the recovery should happen following a formal request to unmount and relocate the machines. Stewart says that during the removal, Compass's team also grabbed one of Dynamics' own servers; that is confirmed in an email from one of Compass's lawyers to Stewart, mentioning how the server had been inadvertently scooped up and asking how to return it.

"Our team is laser-focused on serving our clients, and will do so in accordance with the contracts we have in place with our service providers, and by resolving any disputes arising from a fundamental misunderstanding of these contracts in a court of law," Compass interim co-CEO Thomas Heller said in an email interview.

Even if Compass had prevailed, the optics of the row were terrible. Stewart had chronicled the dispute on Twitter as it played out, accusing Compass of owing him hundreds of thousands of dollars in energy bills and of having essentially broken into Dynamics' facility, and thundered at length against Compass in Twitter Spaces. After a vertiginous rise, Compass had spent the last few months in constant crisis mode, until, mere hours after Stewart had started tweeting about his early-morning showdown with the company, it decided to do away with its CEO. At the center of that crisis was Russia's war with Ukraine, and a bespectacled, curly-haired cybersecurity entrepreneur called Omar Todd.

Read the original:
The Rise and Fall of a Bitcoin Mining Sensation - WIRED

Man who threw away £150m in bitcoin hopes AI and robot dogs will get it back – The Guardian

Posted: at 12:36 pm

A computer engineer who accidentally threw away a hard drive containing approximately £150m worth of bitcoin plans to use artificial intelligence to search through thousands of tonnes of landfill.

James Howells discarded the hardware from an old laptop containing 8,000 bitcoins in 2013 during an office clearout and now believes it is sitting in a rubbish dump in Newport, south Wales.

The council has previously denied the 37-year-old's repeated requests to search the site due to environmental concerns, but he has hatched a £10m hi-tech scheme backed by hedge fund money to find the digital assets.

His new proposal would utilise AI technology to operate a mechanical arm that would filter the rubbish, which would then be picked through by hand at a pop-up facility near the landfill site.

Under the plans he will hire a number of environmental and data recovery experts and, while the search is ongoing, employ robot dogs as security so no one else can try to steal the elusive hard drive.

Howells said: "Digging up a landfill is a huge operation in itself. The funding has been secured. We've brought on an AI specialist. Their technology can easily be retrained to search for a hard drive.

"We've also got an environmental team on board. We've basically got a well-rounded team of various experts, with various expertise, which, when we all come together, are capable of completing this task to a very high standard."

Howells believes the search will take about nine to 12 months. However, even if he does get permission from the council, there is no guarantee the hunt will be successful or that the bitcoins he mined all those years ago will be recoverable from the hard drive.

But if they are, he has pledged to use the money to help the community of Newport and invest in a number of cryptocurrency-based projects, such as a community-owned data mining facility.

Howells said: "We've got a whole list of incentives, of good causes we'd like to do for the community.

"One of the things we'd like to do on the actual landfill site, once we've cleaned it up and recovered that land, is put in a power generation facility, maybe a couple of wind turbines.

"We'd like to set up a community-owned mining facility which is using that clean electricity to create bitcoin for the people of Newport."

However, the major issue Howells still has to overcome is getting permission from the council, which will not meet him to discuss his plans or entertain his ideas.

A spokesperson for Newport city council said: "We have statutory duties which we must carry out in managing the landfill site.

"Part of this is managing the ecological risk to the site and the wider area. Mr Howells' proposals pose significant ecological risk which we cannot accept, and indeed are prevented from considering by the terms of our permit."

See the original post here:
Man who threw away £150m in bitcoin hopes AI and robot dogs will get it back - The Guardian

'Risks posed by AI are real': EU moves to beat the algorithms that ruin lives – The Guardian

Posted: at 12:34 pm

It started with a single tweet in November 2019. David Heinemeier Hansson, a high-profile tech entrepreneur, lashed out at Apple's newly launched credit card, calling it sexist for offering his wife a credit limit 20 times lower than his own.

The allegations spread like wildfire, with Hansson stressing that artificial intelligence, now widely used to make lending decisions, was to blame. "It does not matter what the intent of individual Apple reps are, it matters what THE ALGORITHM they've placed their complete faith in does. And what it does is discriminate. This is fucked up."

While Apple and its underwriters, Goldman Sachs, were ultimately cleared by US regulators of violating fair lending rules last year, the episode rekindled a wider debate around AI use across public and private industries.

Politicians in the European Union are now planning to introduce the first comprehensive global template for regulating AI, as institutions increasingly automate routine tasks in an attempt to boost efficiency and ultimately cut costs.

That legislation, known as the Artificial Intelligence Act, will have consequences beyond EU borders, and, like the EU's General Data Protection Regulation, will apply to any institution, including UK banks, that serves EU customers. "The impact of the act, once adopted, cannot be overstated," said Alexandru Circiumaru, European public policy lead at the Ada Lovelace Institute.

Depending on the EU's final list of "high risk" uses, there is an impetus to introduce strict rules around how AI is used to filter job, university or welfare applications, or, in the case of lenders, assess the creditworthiness of potential borrowers.

EU officials hope that with extra oversight and restrictions on the type of AI models that can be used, the rules will curb the kind of machine-based discrimination that could influence life-altering decisions such as whether you can afford a home or a student loan.

"AI can be used to analyse your entire financial health, including spending, saving and other debt, to arrive at a more holistic picture," said Sarah Kocianski, an independent financial technology consultant. "If designed correctly, such systems can provide wider access to affordable credit."

But one of the biggest dangers is unintentional bias, in which algorithms end up denying loans or accounts to certain groups, including women, migrants or people of colour.

Part of the problem is that most AI models can only learn from the historical data they have been fed, meaning they will learn which kind of customer has previously been lent to and which customers have been marked as unreliable. "There is a danger that they will be biased in terms of what a good borrower looks like," Kocianski said. "Notably, gender and ethnicity are often found to play a part in the AI's decision-making processes based on the data it has been taught on: factors that are in no way relevant to a person's ability to repay a loan."

Furthermore, some models are designed to be blind to so-called protected characteristics, meaning they are not meant to consider the influence of gender, race, ethnicity or disability. But those AI models can still discriminate as a result of analysing other data points, such as postcodes, which may correlate with historically disadvantaged groups that have never previously applied for, secured or repaid loans or mortgages.
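To make the proxy problem concrete, here is a minimal, purely illustrative sketch (not from the article): a toy lending model in Python that is never shown the protected attribute, yet still produces different approval rates by group, because a postcode feature leaks group membership. All data, figures and feature names here are synthetic assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)              # protected attribute, withheld from the model
postcode = np.where(rng.random(n) < 0.9,   # proxy: 90% aligned with group membership
                    group, 1 - group)
income = rng.normal(50, 10, n)

# Historical labels encode past bias: qualified group-1 applicants
# were rejected 40% of the time regardless of income.
qualified = income > 45
biased_reject = (group == 1) & (rng.random(n) < 0.4)
approved = (qualified & ~biased_reject).astype(int)

# The model is "blind" to `group`, but the postcode column leaks it.
X = np.column_stack([income, postcode])
model = LogisticRegression().fit(X, approved)

def rate(g):
    return model.predict(X[group == g]).mean()

print(f"predicted approval rate, group 0: {rate(0):.2f}")
print(f"predicted approval rate, group 1: {rate(1):.2f}")
```

Running this shows markedly lower predicted approval for group 1 even though the model never sees the group label, which is exactly the proxy discrimination the article describes.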

And in most cases, when an algorithm makes a decision, it is difficult for anyone to understand how it came to that conclusion, resulting in what is commonly referred to as "black box" syndrome. It means that banks, for example, might struggle to explain what an applicant could have done differently to qualify for a loan or credit card, or whether changing an applicant's gender from male to female might result in a different outcome.

Circiumaru said the AI act, which could come into effect in late 2024, would benefit tech companies that managed to develop what he called "trustworthy AI" models that are compliant with the new EU rules.

Darko Matovski, the chief executive and co-founder of London-headquartered AI startup causaLens, believes his firm is among them.

The startup, which publicly launched in January 2021, has already licensed its technology to the likes of asset manager Aviva and quant trading firm Tibra, and says a number of retail banks are in the process of signing deals with the firm before the EU rules come into force.

The entrepreneur said causaLens offers a more advanced form of AI that avoids potential bias by accounting and controlling for discriminatory correlations in the data. "Correlation-based models are learning the injustices from the past, and they're just replaying it into the future," Matovski said.

He believes the proliferation of so-called causal AI models like his own will lead to better outcomes for marginalised groups who may have missed out on educational and financial opportunities.

"It is really hard to understand the scale of the damage already caused, because we cannot really inspect this model," he said. "We don't know how many people haven't gone to university because of a haywire algorithm. We don't know how many people weren't able to get their mortgage because of algorithm biases. We just don't know."

Matovski said the only way to protect against potential discrimination was to use protected characteristics such as disability, gender or race as an input, but to guarantee that, regardless of those specific inputs, the decision did not change.

He said it was a matter of ensuring AI models reflected our current social values and avoided perpetuating any racist, ableist or misogynistic decision-making from the past. "Society thinks that we should treat everybody equal, no matter what gender, what their postcode is, what race they are. So then the algorithms must not only try to do it, but they must guarantee it," he said.
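A hedged sketch of the kind of guarantee Matovski describes: include the protected attribute as an input, then verify that flipping it changes no decision. The function below is hypothetical and illustrative, not causaLens code; the usage comments assume the toy model from the earlier sketch.

```python
import numpy as np

def counterfactual_flip_rate(model, X, protected_col):
    """Flip a binary protected attribute for every applicant and report
    the fraction of decisions that change; 0.0 is the guarantee described."""
    X_flipped = X.copy()
    X_flipped[:, protected_col] = 1 - X_flipped[:, protected_col]
    changed = model.predict(X) != model.predict(X_flipped)
    return changed.mean()

# Hypothetical usage, e.g. after appending the protected attribute to the
# feature matrix from the earlier sketch as its final column:
#   X_aug = np.column_stack([income, postcode, group])
#   rate = counterfactual_flip_rate(model_aug, X_aug, protected_col=2)
#   assert rate == 0.0, f"{rate:.1%} of decisions depend on the attribute"
```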

While the EU's new rules are likely to be a big step in curbing machine-based bias, some experts, including those at the Ada Lovelace Institute, are pushing for consumers to have the right to complain and seek redress if they think they have been put at a disadvantage.

"The risks posed by AI, especially when applied in certain specific circumstances, are real, significant and already present," Circiumaru said.

"AI regulation should ensure that individuals will be appropriately protected from harm by approving or not approving uses of AI, and have remedies available where approved AI systems malfunction or result in harms. We cannot pretend approved AI systems will always function perfectly and fail to prepare for the instances when they won't."

See the rest here:

'Risks posed by AI are real': EU moves to beat the algorithms that ruin lives - The Guardian

Artificial Intelligence: 3 ways the pandemic accelerated its adoption – The Enterprisers Project

Posted: at 12:34 pm

The need for organizations to quickly create new business models and marketing channels has accelerated AI adoption throughout the past couple of years. This is especially true in healthcare, where data analytics accelerated the development of COVID-19 vaccines. In consumer-packaged goods, Harvard Business Review reported that Frito-Lay created an e-commerce platform, Snacks.com, in just 30 days.

The pandemic also accelerated AI adoption in education, as schools were forced to enable online learning overnight. And wherever possible, the world shifted to touchless transactions, completely transforming the banking industry.

Three technology developments during the pandemic accelerated AI adoption:

1. More powerful, less expensive computing
2. New data architectures such as data fabric and data mesh
3. The explosive growth of data

[ Also read: Artificial Intelligence: How to stay competitive. ]

Let's look at the pros and cons of these developments for IT leaders.

Even 60 years after Moore's Law was formulated, computing power keeps increasing, with more powerful machines and more processing power through new chips from companies like Nvidia. AI Impacts reports that computing power available per dollar has probably increased by a factor of ten roughly every four years over the last quarter of a century (measured in FLOPS or MIPS). However, the rate has been slower over the past 6-8 years.
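For a sense of scale, a tenfold gain every four years compounds to roughly 78% more computing per dollar each year; a quick back-of-the-envelope check in Python:

```python
# Tenfold growth every four years implies an annual factor of 10 ** (1/4).
annual = 10 ** (1 / 4)
print(f"annual growth factor: {annual:.2f}x")         # ~1.78x per year
print(f"implied decade growth: {annual ** 10:.0f}x")  # 10^2.5, ~316x
```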

Pros: More for less

Inexpensive computing gives IT leaders more choices, enabling them to do more with less.

Cons: Too many choices can lead to wasted time and money

Consider big data. With inexpensive computing, IT pros want to wield its power. There is a desire to start ingesting and analyzing all available data, leading to better insights, analysis, and decision-making.

But if you are not careful, you could end up with massive computing power and not enough real-life business applications.

As networking, storage, and computing costs drop, the human inclination is to use them more. But using them more doesn't necessarily deliver business value everywhere.

Before the pandemic, the terms "data warehouse" and "data lake" were standard, and they remain so today. But new data architectures like data fabric and data mesh were almost non-existent. Data fabric enables AI adoption because it lets enterprises use data to maximize their value chain by automating data discovery, governance, and consumption. Organizations can provide the right data at the right time, regardless of where it resides.

Pros: IT leaders will have the opportunity to rethink data models and data governance

It provides a chance to buck the trend toward centralized data repositories or data lakes. This might mean more edge computing and data available where it is most relevant. These advancements result in appropriate data being automatically available for decisioning, which is critical to AI operability.

Cons: Not understanding the business need

IT leaders need to understand the business and AI aspects of new data architectures. If they don't know what each part of the business needs, including the kind of data and where and how it will be used, they may not create the correct type of data architecture and data consumption for proper support. IT's understanding of the business needs, and the business models that go with that data architecture, will be essential.

Statista research underscores the growth of data: the total amount of data created, captured, copied, and consumed globally was 64.2 zettabytes in 2020 and is projected to reach more than 180 zettabytes in 2025. Statista research from May 2022 reports, "The growth was higher than previously expected, caused by the increased demand due to the COVID-19 pandemic." Big data sources include media, cloud, IoT, the web, and databases.

Pros: Data is powerful

Every decision and transaction can be traced back to a data source. If IT leaders can use AIOps/MLOps to zero in on data sources for analysis and decision-making, they are empowered. Proper data can deliver instant business analysis and provide deep insights for predictive analysis.

Cons: How do you know what data to use?

Besieged by data from IoT and edge computing, formatted and unformatted, intelligent and unintelligible, IT leaders are dealing with the 80/20 rule: what are the 20 percent of credible data sources that deliver 80 percent of the business value? How do you use AI/ML ops to determine the credible data sources, and which data source should be used for analysis and decision-making? Every organization needs to find answers to these questions.

AI is becoming ubiquitous, powered by new algorithms and increasingly plentiful and inexpensive computing power. AI technology has been on an evolutionary road for more than 70 years. The pandemic did not accelerate the development of AI; it accelerated its adoption.

Harnessing AI is the challenge ahead.

[ Want best practices for AI workloads? Get the eBook: Top considerations for building a production-ready AI/ML environment. ]

Visit link:

Artificial Intelligence: 3 ways the pandemic accelerated its adoption - The Enterprisers Project

You Need To Stop Doing This On Your AI Projects – Forbes

Posted: at 12:34 pm

It's easy to get excited about AI projects, especially when you hear about all the amazing things people are doing with AI, from conversational and natural language processing (NLP) systems to image recognition, autonomous systems, and great predictive analytics and pattern and anomaly detection capabilities. However, when people get excited about AI projects, they tend to overlook some significant red flags. And it's those red flags that are causing over 80% of AI projects to fail.

One of the biggest reasons for AI project failure is that companies don't justify the use of AI from a return on investment (ROI) perspective. Simply put, many projects are not worth the time and expense given the cost, complexity, and difficulty of implementing the AI systems.

Organizations rush past the exploration phase of AI adoption, jumping from simple proof-of-concept demos right to production without first assessing whether the solution will provide any positive return. One big reason for this is that measuring AI project ROI can prove more difficult than first expected. Far too often, teams are getting pressure from upper management, colleagues, or external teams to just get started with their AI efforts, and projects move forward without a clear answer to the problem they are actually trying to solve or the ROI that's going to be seen. When companies struggle to develop a clear understanding of what to expect when it comes to the ROI of AI, misalignment of expectations is always the result.

Missing and Misaligned ROI Expectations

So, what happens when the ROI of an AI project isn't aligned with expectations from management? One of the most common reasons why AI projects fail is that the ROI is not justified by the investment of money, resources, and time. If you're going to be spending your time, effort, human resources, and money implementing an AI system, you want to get a well-identified positive return.

Even worse than a misaligned ROI is the fact that many organizations aren't even measuring or quantifying ROI to begin with. ROI can be measured in a variety of ways: as a financial return, such as generating income or reducing expenses, but also as a return on time, a shifting or reallocating of critical resources, improved reliability and safety, reduced errors and improved quality control, or improved security and compliance. It's easy to see how an AI project could provide a positive ROI: if you spend a hundred thousand dollars on an AI project to eliminate two million dollars of potential cost or liability, then it's worth every dollar spent to reduce the liability. But you'll only see that ROI if you actually plan for it ahead of time and manage that ROI.
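As a rough sketch of that arithmetic, using the article's hypothetical figures of a $100,000 project that eliminates $2 million of potential liability:

```python
def roi(net_gain: float, cost: float) -> float:
    """Basic ROI: (gain - cost) / cost."""
    return (net_gain - cost) / cost

cost = 100_000                 # hypothetical AI project spend
avoided_liability = 2_000_000  # potential cost or liability eliminated
print(f"ROI: {roi(avoided_liability, cost):.0%}")  # 1900%
```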

Management guru Peter Drucker once famously said, "You can't manage what you don't measure." The act of measuring and managing AI ROI is what sets apart those who see positive value from AI from those who end up canceling their projects years, and millions of dollars, into their efforts.

Boiling the Ocean and Biting off More than You Can Chew

Another big reason why companies aren't seeing the ROI they are expecting is that projects try to bite off way too much all at once. Iterative, agile best practices, especially those employed by best-practice AI methodologies such as CPMAI, clearly advise project owners to "Think Big. Start Small. Iterate Often." There are unfortunately many unsuccessful AI implementations that have taken the opposite approach by thinking big, starting big, and iterating infrequently. One case in point is Walmart's investment in AI-powered robots for inventory management. In 2017 Walmart invested in robots to scan store shelves, and by 2022 it had pulled them out of stores.

Clearly Walmart had sufficient resources and smart people, so you can't blame the failure on bad people or bad technology. Rather, the main issue was a bad solution to the problem. Walmart realized that it was just cheaper and easier to use the human employees it already had working in the stores to complete the same tasks the robots were supposed to do. Another example of a project not returning the expected results can be found in the various applications of the Pepper robot in supermarkets, museums, and tourist areas. Better people or better technology wouldn't have solved this problem; rather, a better approach to managing and evaluating AI projects would have. Methodology, folks.

Adopting a Step-by-Step Approach to Running AI and Machine Learning Projects

Did these companies get caught up in the hype of the technology? Were these companies just looking to have a robot roaming the halls for the cool factor? Because being cool isn't solving any real business problems nor solving a pain point. Don't do AI for the sake of AI. If you do AI just for the sake of AI, then don't be surprised when you don't have a positive ROI.

So, what can companies do to ensure positive ROI for their projects? First, stop implementing AI projects for AI's sake. Successful companies are adopting a step-by-step approach to running AI and machine learning projects. As mentioned earlier, methodology is often the missing secret sauce for successful AI projects. Organizations are now seeing the benefit of employing approaches such as the Cognitive Project Management for AI (CPMAI) methodology, built upon decades-old data-centric project approaches such as CRISP-DM and incorporating established best-practice agile approaches to provide for short, iterative sprints.

These approaches all start with the business user and requirements in mind. The very first step of CRISP-DM, CPMAI, and even agile is to figure out whether you should even move forward with an AI project. These methodologies suggest that alternate approaches, such as automation, straight-up programming, or even just more people, might be more appropriate to solve the problem at hand.

The AI Go No Go Analysis

AI Go No Go Decisions, CPMAI Methodology, Cognilytica

If AI is the right solution, then you need to make sure that you answer yes to a variety of different questions to assess whether you're ready to embark on your AI project. The set of questions you need to ask to determine whether to move forward with an AI project is called the AI Go No Go analysis, and it is part of the very first phase of the CPMAI methodology. The AI Go No Go analysis has users ask a series of nine questions in three general categories. In order for an AI project to actually go forward, you need three things in alignment: business feasibility, data feasibility, and technology/execution feasibility. The first of the three general categories asks about business feasibility: whether there is a clear problem definition, whether the organization is actually willing to invest in this change once created, and whether there is sufficient ROI or impact.

These may seem like very basic questions, but far too often these very simple questions are skipped. The second set of questions deals with data, including data quality, data quantity, and data access considerations. The third set of questions is around implementation, including whether you have the correct team and skill sets, can execute the model as required, and can use the model where planned.

The most difficult part of asking these questions is being honest with the answers. It's important to be really honest when addressing whether to move forward with the project; if you answer no to one or more of these questions, it means either you're not ready to move forward yet or you should not move forward at all. Don't just plow ahead and do it anyway, because if you do, don't be surprised when you've wasted a lot of time, energy, and resources and don't get the ROI you were hoping for.
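To illustrate the structure of the analysis, here is a minimal sketch that encodes the nine questions as a checklist; the wording is paraphrased from this article, and the code is illustrative, not official CPMAI tooling:

```python
# Hypothetical encoding of the AI Go No Go analysis described above.
CHECKLIST = {
    "business": [
        "Is there a clear problem definition?",
        "Is the organization willing to invest in the change once created?",
        "Is there sufficient ROI or impact?",
    ],
    "data": [
        "Is the data quality sufficient?",
        "Is there enough data?",
        "Do you have access to the data you need?",
    ],
    "implementation": [
        "Do you have the correct team and skill sets?",
        "Can you execute the model as required?",
        "Can the model be used where planned?",
    ],
}

def go_no_go(answers: dict[str, list[bool]]) -> bool:
    """Go only if every question in every category is honestly a yes."""
    return all(all(category) for category in answers.values())

# Example: strong on business and data, missing the skills to execute.
answers = {
    "business": [True, True, True],
    "data": [True, True, True],
    "implementation": [False, True, True],
}
print("GO" if go_no_go(answers) else "NO GO")  # NO GO
```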

Here is the original post:

You Need To Stop Doing This On Your AI Projects - Forbes

How can CIOs build the next generation of AI talent? – Wire19

Posted: at 12:34 pm

As technological innovation continues to accelerate and artificial intelligence (AI) becomes more prevalent, businesses are looking for ways to build the next generation of AI talent. According to Gartner, over 80% of Internet of Things (IoT) activities in enterprises will employ AI and machine learning. Skilled workers are the most important factor in AI development: although technology and algorithms have become commoditized, there is big demand for workers who can solve problems with AI.

Here are a few things CIOs can do in order to make this happen.

Nurture the next generation of AI talent

Nurture and grow next-gen AI talent through continuous innovation where industry, science, engineering, and human ingenuity intersect. CIOs need to give talented AI professionals a good place to work and make sure they have the freedom to create value and meet their expectations. Create tech hubs to grow your local ecosystems and build the next generation of AI talent now.

Bring merit to education in AI

AI workshops, certifications, and bootcamps on their own do not have educational merit and do not build practitioner-level skills. You need to build AI education around the intellectual infrastructure that already exists in local academic communities. Centers of excellence that engage via an eight-stakeholder model must form out of those communities to make AI education effective and bring merit to education in AI. CIOs need to identify the areas where the local ecosystem is lacking and use this as an opportunity to create value. This means that the technology and academic communities in each region need to work together to build local AI centers of excellence. Education is required to lead in the field of artificial intelligence, which is why it is so important to make sure the academic system for this discipline is strong.

Support from national governments

National governments need to support AI ecosystems from the grassroots level. Each stakeholder in an AI ecosystem has a role to play in building a value network that goes from the local government up to a federal policymaker. An AI ecosystem is made up of eight different stakeholders, each with different goals. For these stakeholders to achieve their goals, they need the support of the government. National governments should recognize AI degrees and education immediately, at the graduate school level.

In what seems to align with Indian Prime Minister Narendra Modi's Digital India vision, Deloitte and IIT Roorkee have announced a collaboration to empower and build the next generation of Indian talent in the field of AI. Deloitte and IIT Roorkee will together deliver rigorous, immersive programs in AI and machine learning that are designed to build the next-generation workforce. This will revolutionize how organizations and academia work together to overcome the AI talent gap by imparting industry-relevant skills to Indian talent in new-age tools and developing future leaders who are highly proficient in AI.

The future of AI is bright, and businesses need to start preparing now for the talent they will need in the future. By considering the suggestions we've outlined, CIOs can make sure their business is at the forefront of this exciting industry. Are you ready to build the next generation of AI talent?

Also read: Automate your work processes with Digital Employees

More:

How can CIOs build the next generation of AI talent? - Wire19

AI asked to create an image of what death looks like – TweakTown

Posted: at 12:34 pm

An artificial intelligence has been asked to create an image of what death looks like, and the results are simply stunning.

The artificial intelligence (AI) that was asked to create the images seen in the above video is called MidJourney, which was created by David Holz, co-founder of Leap Motion. It is currently run by a small self-funded team with several well-known advisors: Jim Keller, known for his work at AMD, Apple, Tesla, and Intel; Nat Friedman, the CEO of GitHub; and Bill Warner, the founder of Avid Technology and inventor of nonlinear video editing.

MidJourney is an incredible piece of technology, and it recently went into open beta, which means anyone can try it by simply heading over to its dedicated Discord server. Users can enter "/imagine", followed by a text prompt of what they want the AI to produce. Users have been testing the AI's capabilities by entering descriptive words such as HD, hyper-realistic, 4K, and wallpaper, all of which work perfectly.

As for the predictive capability of MidJourney: none of the images seen in this article or any other source should be taken as a prediction. MidJourney was created to expand the human species' imaginative power, not to make predictions.

Using MidJourney's image generation algorithms, users are able to create ultra-realistic images of whatever they wish. The possibilities are truly endless, and with accurate text inputs, you can create wallpaper-worthy images. I tested the AI and created several images that are now being used as wallpapers, but what was more impressive was what the other users in the Discord were making. Below are some examples of what I found and what the user inputted into the AI to get the result.

Use MidJourney AI here.

- A detailed futuristic soldier portrait gas mask, slightly visible shoulders, explosion in background

- A detailed oil painting of final fantasy XIII versus battle of light and darkness

- Universe

- A young boy sleeping on a mat, smiling at the camera, big brown eyes, hyper realistic, 4K, very clear

- Cyberpunk cat, 4K, red glasses, ultra realistic

The rest is here:

AI asked to create an image of what death looks like - TweakTown

The Computer Scientist Trying to Teach AI to Learn Like We Do – Quanta Magazine

Posted: at 12:34 pm

Kanan has been toying with machine intelligence nearly all his life. As a kid in rural Oklahoma who just wanted to have fun with machines, he taught bots to play early multi-player computer games. That got him wondering about the possibility of artificial general intelligence, a machine with the ability to think like a human in every way. This made him interested in how minds work, and he majored in philosophy and computer science at Oklahoma State University before his graduate studies took him to the University of California, San Diego.

Now Kanan finds inspiration not just in video games, but also in watching his nearly 2-year-old daughter learn about the world, with each new learning experience building on the last. Because of his and others work, catastrophic forgetting is no longer quite as catastrophic.

Quanta spoke with Kanan about machine memories, breaking the rules of training neural networks, and whether AI will ever achieve human-level learning. The interview has been condensed and edited for clarity.

It has served me very well as an academic. Philosophy teaches you, "How do you make reasoned arguments?" and "How do you analyze the arguments of others?" That's a lot of what you do in science. I still have essays from way back then on the failings of the Turing test, and things like that. And so those things I still think about a lot.

My lab has been inspired by asking the question: Well, if we can't do X, how are we going to be able to do Y? We learn over time, but neural networks, in general, don't. You train them once. It's a fixed entity after that. And that's a fundamental thing that you'd have to solve if you want to make artificial general intelligence one day. If it can't learn without scrambling its brain and restarting from scratch, you're not really going to get there, right? That's a prerequisite capability to me.

The most successful method, called "replay," stores past experiences and then replays them during training with new examples, so they are not lost. It's inspired by memory consolidation in our brain, where during sleep the high-level encodings of the day's activities are replayed as the neurons reactivate.

In other words, for the algorithms, new learning cant completely eradicate past learning since we are mixing in stored past experiences.

There are three styles for doing this. The most common style is "veridical replay," where researchers store a subset of the raw inputs, for example the original images for an object recognition task, and then mix those stored images from the past in with new images to be learned. The second approach replays compressed representations of the images. A third, far less common, method is "generative replay." Here, an artificial neural network actually generates a synthetic version of a past experience and then mixes that synthetic example with new examples. My lab has focused on the latter two methods.
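As a rough sketch of the first style, veridical replay can be as simple as keeping a buffer of raw past examples and mixing a sample of them into every new training batch. The buffer below is illustrative, and the `model_update` callable is a hypothetical stand-in for a real gradient step:

```python
import random

class ReplayBuffer:
    """Fixed-size store of past (input, label) examples.

    Reservoir sampling keeps a uniform random sample of everything seen,
    so earlier tasks stay represented as new data streams in."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            i = random.randrange(self.seen)
            if i < self.capacity:
                self.data[i] = example

    def sample(self, k: int):
        return random.sample(self.data, min(k, len(self.data)))

def train_step(model_update, new_batch, buffer, replay_k=32):
    """Mix stored past examples into the current batch so a gradient step
    on new data cannot completely overwrite what was learned before."""
    mixed = list(new_batch) + buffer.sample(replay_k)
    model_update(mixed)  # one gradient step on the mixed batch
    for example in new_batch:
        buffer.add(example)
```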

Unfortunately, though, replay isn't a very satisfying solution.

Read more:

The Computer Scientist Trying to Teach AI to Learn Like We Do - Quanta Magazine

Here’s Why Businesses Are Having A Tumultuous Love-Hate Relationship With AI Ethics Boards – Forbes

Posted: at 12:34 pm

AI Ethics advisory boards are essential but also require focus and attention, else they can fall apart and be untoward for all concerned.

Should a business establish an AI Ethics advisory board?

You might be surprised to know that this is not an easy yes-or-no answer.

Before I get into the complexities underlying the pros and cons of putting in place an AI Ethics advisory board, let's make sure we are all on the same page as to what an AI Ethics advisory board consists of and why it has risen to headline-level prominence.

As everyone knows, Artificial Intelligence (AI) and the practical use of AI for business activities have gone through the roof as a must-have for modern-day companies. You would be hard-pressed to argue otherwise. To some degree, the infusion of AI has made products and services better, plus at times led to lower costs associated with providing said products and services. A nifty list of efficiencies and effectiveness boosts can be potentially attributed to the sensible and appropriate application of AI. In short, the addition or augmenting of what you do by incorporating AI can be a quite profitable proposition.

There is also the, shall we say, big splash that comes with adding AI into your corporate endeavors.

Businesses are loud and proud about their use of AI. If the AI just so happens to also improve your wares, that's great. Meanwhile, claims of using AI are sufficiently attention-grabbing that you can pretty much be doing the same things you did before, yet garner a lot more bucks or eyeballs by tossing around the banner of AI as being part of your business strategy and out-the-door goods.

That last point, about sometimes fudging a bit about whether AI is really being used, gets us edging into the arena of AI Ethics. There is all manner of outright false claims being made about AI by businesses. Worse still, perhaps, is the use of AI that turns out to be the so-called AI For Bad.

For example, you've undoubtedly read about the many instances of AI systems using Machine Learning (ML) or Deep Learning (DL) that have ingrained racial biases, gender biases, and other undue improper discriminatory practices. For my ongoing and extensive coverage of these matters relating to adverse AI and the emergence of clamoring calls for AI Ethics and Ethical AI, see the link here and the link here, just to name a few.

So, we have these sour drivers hidden within the seemingly all-rosy AI use by businesses: false or overblown claims of using AI, and uses of AI that turn out to be AI For Bad.

How do these kinds of thoughtless or disgraceful practices arise in companies?

One notable piece of the puzzle is a lack of AI Ethics awareness.

Top executives might be unaware of the very notion of devising AI that abides by a set of Ethical AI precepts. The AI developers in such a firm might have some awareness of the matter, though perhaps they are only familiar with AI Ethics theories and do not know how to bridge the gap in day-to-day AI development endeavors. There is also the circumstance of AI developers that want to embrace AI Ethics but then get a strong pushback when managers and executives believe that this will slow down their AI projects and bump up the costs of devising AI.

A lot of top executives do not realize that a lack of adhering to AI Ethics is likely to end up kicking them and the company in their posterior upon the release of AI which is replete with thorny and altogether ugly issues. A firm can get caught with bad AI in its midst that then woefully undermines the otherwise long-time built-up reputation of the firm (reputational risk). Customers might choose to no longer use the company's products and services (customer loss risk). Competitors might capitalize on this failure (competitive risk). And there are lots of attorneys ready to aid those that have been transgressed, aiming to file hefty lawsuits against firms that have allowed rotten AI into their company wares (legal risk).

In brief, the ROI (return on investment) for making suitable use of AI Ethics is almost certainly more beneficial than in comparison to the downstream costs associated with sitting atop a stench of bad AI that should not have been devised nor released.

Turns out that not everyone has gotten that memo, so to speak.

AI Ethics is only gradually gaining traction.

Some believe that inevitably the long arm of the law might be needed to further inspire the adoption of Ethical AI approaches.

Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have distinctive laws to govern various development and uses of AI. New laws are indeed being bandied around at the international, federal, state, and local levels that concern the range and nature of how AI should be devised. The effort to draft and enact such laws is a measured one. AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws.

Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient. In fact, they forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages. See for example my coverage at the link here and the link here.

Let's make sure we are all on the same page about what the basics of AI Ethics contain.

In my column coverage, I've previously discussed various collective analyses of AI Ethics principles, such as this assessment at the link here, which proffers a helpful keystone list of Ethical AI criteria or characteristics regarding AI systems:

Those AI Ethics principles are earnestly supposed to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems.

All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that only coders or those that program the AI are subject to adhering to the AI Ethics notions. It takes a village to devise and field AI, and for which the entire village has to be versed in and abide by AI Ethics precepts.

A means of trying to introduce and keep sustained attention regarding the use of AI Ethics precepts can be partially undertaken via establishing an AI Ethics advisory board.

We will unpack the AI Ethics advisory board facets next.

AI Ethics Boards And How To Do Them Right

Companies can be at various stages of AI adoption, and likewise at differing stages of embracing AI Ethics.

Envision a company that wants to get going on AI Ethics embracement but isn't sure how to do so. Another scenario might be a firm that already has dabbled with AI Ethics but seems unsure of what needs to be done in furtherance of the effort. A third scenario could be a firm that has been actively devising and using AI and internally has done a lot to embody AI Ethics, though it realizes that there is a chance it is missing out on other insights, perhaps due to internal groupthink.

For any of those scenarios, putting in place an AI Ethics advisory board might be prudent.

The notion is rather straightforward (well, to clarify, the overall notion is the proverbial tip of the iceberg, and the devil is most certainly in the details, as we will momentarily cover).

An AI Ethics advisory board typically consists primarily of external advisors that are asked to serve on a special advisory board or committee for the firm. There might also be some internal participants included in the board, though usually the idea is to garner advisors from outside the firm who can bring a semi-independent perspective to what the company is doing.

I say semi-independent since there are undoubtedly going to be some potential independence conflicts that can arise with the chosen members of the AI Ethics advisory board. If the firm is paying the advisors, it raises the obvious question of whether the paid members feel reliant on the firm for a paycheck or that they might be uneasy criticizing the gift horse they have in hand. On the other hand, businesses are used to making use of outside paid advisors for all manner of considered independent opinions, so this is somewhat customary and expected anyway.

The AI Ethics advisory board is usually asked to meet periodically, either in person or on a virtual remote basis. They are used as a sounding board by the firm. The odds are, too, that the members are being provided with various internal documents, reports, and memos about the efforts afoot related to AI at the firm. Particular members of the AI Ethics advisory board might be asked to attend internal meetings as befits their specific expertise. Etc.

Besides being able to see what is going on with AI within the firm and provide fresh eyes, the AI Ethics advisory board usually has a dual role of being an outside-to-inside purveyor of the latest in AI and Ethical AI. Internal resources might not have the time to dig into what is happening outside of the firm and ergo can get keenly focused and tailored state-of-the-art viewpoints from the AI Ethics advisory board members.

There are also inside-to-outside uses of an AI Ethics advisory board.

This can be tricky.

The concept is that the AI Ethics advisory board is utilized to let the outside world know what the firm is doing when it comes to AI and AI Ethics. This can be handy as a means of bolstering the reputation of the firm. The AI-infused products and services might be perceived as more trustworthy due to the golden seal of approval from the AI Ethics advisory board. In addition, calls for the firm to be doing more about Ethical AI can be somewhat blunted by pointing out that an AI Ethics advisory board is already being utilized by the company.

Questions that usually are brought to an AI Ethics advisory board by the firm utilizing such a mechanism often include:

Tapping into an AI Ethics advisory board assuredly makes sense, and firms have been increasingly marching down this path.

Please be aware that there is another side to this coin.

On one side of the coin, AI Ethics advisory boards can be the next best thing since sliced bread. Do not neglect the other side of the coin: namely, they can also be a monumental headache, and you might regret that you veered into this dicey territory (as you'll see in this discussion, the downsides can be managed, if you know what you are doing).

Companies are beginning to realize that they can find themselves in a bit of a pickle when opting to go the AI Ethics advisory board route. You could assert that this machination is somewhat akin to playing with fire. You see, fire is a very powerful element that you can use to cook meals, protect you from predators whilst in the wilderness, keep you warm, bring forth light, and provide a slew of handy and vital benefits.

Fire can also get you burned if you aren't able to handle it well.

There have been various news headlines of recent note that vividly demonstrate the potential perils of having an AI Ethics advisory board. If a member summarily decides that they no longer believe that the firm is doing the right Ethical AI activities, the disgruntled member might quit in a huge huff. Assuming that the person is likely to be well known in the AI field or industry all told, their jumping ship is bound to catch widespread media attention.

A firm then has to go on the defense.

Why did the member leave?

What is the company nefariously up to?

Some firms require that the members of the AI Ethics advisory board sign NDAs (non-disclosure agreements), which seemingly will protect the firm if the member decides to go rogue and trash the company. The problem though is that even if the person remains relatively silent, there is nonetheless a likely acknowledgment that they no longer serve on the AI Ethics advisory board. This, by itself, will raise all kinds of eyebrow-raising questions.

Furthermore, even if an NDA exists, sometimes the member will try to skirt around the provisions. This might include referring to unnamed wink-wink generic case studies to highlight AI Ethics anomalies that they believe the firm insidiously was performing.

The fallen member might be fully brazen and come out directly naming their concerns about the company. Whether this is a clear-cut violation of the NDA is perhaps somewhat less crucial than the fact that word is being spread of Ethical AI qualms. A firm that tries to sue the member for breach of the NDA can brutally bring hot water onto itself, stoking added attention to the dispute and appearing to be the classic David versus Goliath duel (the firm being the large monster).

Some top execs assume that they can simply reach a financial settlement with any member of the AI Ethics advisory board who feels the firm is doing the wrong things, including ignoring or downplaying voiced concerns.

This might not be as easy as one assumes.

Oftentimes, the members are devoutly ethically minded and will not readily back down from what they perceive to be an ethical right-versus-wrong fight. They might also be otherwise financially stable and not willing to shave their ethical precepts, or they might have other employment that remains untouched by their having left the AI Ethics advisory board.

As might be evident, some later realize that an AI Ethics advisory board is a dual-edged sword. There is tremendous value and important insight that such a group can convey. At the same time, you are playing with fire. It could be that a member or members decide they no longer believe that the firm is doing credible Ethical AI work. In the news have been indications of, at times, an entire AI Ethics advisory board quitting together, all at once, or of some preponderance of the members announcing they are leaving.

Be ready for the good and the problems that can arise with AI Ethics advisory boards.

Of course, there are times that companies are in fact not doing the proper things when it comes to AI Ethics.

Therefore, we would hope and expect that an AI Ethics advisory board at that firm would step up to make this known, presumably internally within the firm first. If the firm continues on the perceived bad path, the members would certainly seem ethically bound (possibly legally too) to take other action as they believe is appropriate (members should consult their personal attorney for any such legal advice). It could be that this is the only way to get the company to change its ways. A drastic action by a member or set of members might seem to be the last resort that the members hope will turn the tide. In addition, those members likely do not want to be part of something that they ardently believe has gone astray from AI Ethics.

A useful way to consider these possibilities is this:

The outside world won't necessarily know whether the member that exits has a bona fide basis for concern about the firm or whether it might be some idiosyncratic concern or misimpression on the member's part. There is also the rather straightforward possibility of a member leaving the group due to other commitments or for personal reasons that have nothing to do with what the firm is doing.

The gist is that it is important for any firm adopting an AI Ethics advisory board to mindfully think through the entire range of life cycle phases associated with the group.

With all that talk of problematic aspects, I don't want to convey the impression of staying clear of having an AI Ethics advisory board. That is not the message. The real gist is to have an AI Ethics advisory board and make sure you do so the right way. Make that into your cherished mantra.

Here are some of the oft-mentioned benefits of an AI Ethics advisory board:

Here are common ways that firms mess up and undercut their AI Ethics advisory board (don't do this!):

Another frequently confounding problem involves the nature and demeanor of the various members that are serving on an AI Ethics advisory board, which can sometimes be problematic in these ways:

Some firms just seem to toss together an AI Ethics advisory board on a somewhat willy-nilly basis. No thought goes toward the members to be selected. No thought goes toward what they each bring to the table. No thought goes toward the frequency of meetings and how the meetings are to be conducted. No thought goes toward running the AI Ethics advisory board, all told. Etc.

In a sense, by your own lack of resourcefulness, you are likely putting a train wreck in motion.

Dont do that.

Perhaps this list of the right things to do is now ostensibly obvious to you based on the discourse so far, but you would perhaps be shocked to know that few firms seem to get this right:

Conclusion

A few years ago, many of the automakers and self-driving tech firms that are embarking upon devising AI-based self-driving cars were suddenly prompted into action to adopt AI Ethics advisory boards. Until that point in time, there had seemed to be little awareness of having such a group. It was assumed that the internal focus on Ethical AI would be sufficient.

I've discussed at length in my column the various unfortunate AI Ethics lapses or oversights that have at times led to self-driving car issues such as minor vehicular mishaps, overt car collisions, and other calamities; see my coverage at the link here. The importance of AI safety and like protections has to be the topmost consideration for those making autonomous vehicles. AI Ethics advisory boards in this niche are helping to keep AI safety a vital top-of-mind priority.

My favorite way to express this kind of revelation about AI Ethics is to liken the matter to earthquakes.

Californians are subject to earthquakes from time to time, sometimes rather hefty ones. You might think that being earthquake prepared would be an ever-present consideration. Not so. The cycle works this way. A substantive earthquake happens and people get reminded of being earthquake prepared. For a short while, there is a rush to undertake such preparations. After a while, the attention to this wanes. The preparations fall by the wayside or are otherwise neglected. Boom, another earthquake hits, and all those that should have been prepared are caught unawares as though they hadnt realized that an earthquake could someday occur.

Firms often do somewhat the same about AI Ethics advisory boards.

They don't start one, and then suddenly, upon some catastrophe about their AI, they are reactively spurred into action. They flimsily start an AI Ethics advisory board. It has many of the troubles I've earlier cited herein. The AI Ethics advisory board falls apart. Oops, a new AI calamity within the firm reawakens the need for the AI Ethics advisory board.

Wash, rinse, and repeat.

Businesses definitely find that they sometimes have a love-hate relationship with their AI Ethics advisory board efforts. When it comes to doing things the right way, love is in the air. When it comes to doing things the wrong way, hate ferociously springs forth. Make sure you do what is necessary to keep the love going and avert the hate when it comes to establishing and maintaining an AI Ethics advisory board.

Let's turn this into a love-love relationship.

See the rest here:

Here's Why Businesses Are Having A Tumultuous Love-Hate Relationship With AI Ethics Boards - Forbes

Artificial Intelligence Revolutionizing Content Writing – Entrepreneur

Posted: at 12:34 pm

Opinions expressed by Entrepreneur contributors are their own.

You're reading Entrepreneur India, an international franchise of Entrepreneur Media.

The idea of Pepper Content germinated in a dormitory of BITS, Pilani. The story of the founders was similar to that of average Indian teenagers who wanted to pursue engineering.

The founders realized they had a shared passion for content. It was clear that, for brands, smartphones and the Internet had changed the principles of customer engagement and experience. More than 700 million Internet users, businesses included, were accessing and consuming different forms of content daily. However, access to quality content was not as easy.

"We asked ourselves that if, in this instant noodle economy, items like food and medicine get ordered and delivered at the tap of a button, then why can't content be treated and delivered the same way? Every company in the world has a content need. In today's day and age, this opportunity stands at a staggering $400 billion globally. This was when we began the B2B content marketplace, Pepper Content, in 2017," said Anirudh Singla, co-founder and CEO, Pepper Content.

The co-founders, with limited resources, ongoing classes, assignments, and exams, persisted in achieving their dreams. In 2017, the company received its first order: 250 articles on automotives. Pepper Content enables marketers to connect with the best writers, designers, translators, videographers, editors, and illustrators, and vets the marketplace's creative professionals using its AI algorithms to make the right match between businesses and creative professionals. To support its creators, Pepper Content has invested in building tools that augment their ability and make them more productive; one of its key products, Peppertype.ai, is currently being used by over 200,000 users across 150 countries. The company has on-boarded over 1,000 enterprises and fast-growing startups, and works with over 2,500 customers, including organizations such as Adani Enterprises, NPS Trust, Hindustan Unilever, and P&G; financial services and insurance companies such as HDFC Bank, CRED, Groww, SBI Mutual Fund, and TATA Capital; and technology firms such as Binance, Google, and Adobe.

According to the co-founders, Pepper Content is not a startup or an agency but a platform that connects people seamlessly. The company aims to create the perfect symphony between creators and brands when it comes to content. The company is enabling strategic collaboration that will have a tangible, on-ground impact.

The co-founders always wanted to take a product-first approach, which meant understanding the nuances and solving for every use case. The first products were hyper-customised sheets with deep linking of formulae and scripts that enabled the company to piece together workflows. The team worked on 25,000 content pieces in Google Sheets and Docs in the initial stages, which helped the co-founders understand the customer workflow.

Businesses can directly order quality content on the platform with faster turnaround times and complete transparency on the project's progress. The company's intelligent algorithms take care of all the management aspects: from finding the best creator-project match to running agile workflows and driving integrated tool-supported editorial checks for quality content delivery.

"The content marketing industry stands at $400 billion globally, and it is only going to scale further. However, no organised players are enabling seamless workflow for brands. Every company produces and outsources content in written, image, audio, and video formats. To date, companies are required to post requirements, bid for projects, choose from a large list of bidders, and negotiate pay, making it cumbersome and, frankly, unscalable. We are solving this by offering a managed marketplace. We take care of entire content operations, right from the ordering flow to end-to-end delivery. For companies, quality content delivery creates trust; for creators, it takes care of timely payments and operational inefficiencies," said Rishabh Shekhar, co-founder and COO, Pepper Content.

The co-founders struggled in the initial days since they did not know anyone from the investor community. "We cold-emailed 80 VC and angel investors! There were a lot of questions and conversations about the company's scale and our age. It took us three months, but we persisted and were oversubscribed for the seed funding round. Over the years we scaled a B2B content marketplace, built a product that was unheard of, and have credible investors backing us. We realized that age is no hindrance if your vision is clear and you have a product that creates real impact."

More:

Artificial Intelligence Revolutionizing Content Writing - Entrepreneur
