Daily Archives: December 25, 2019

How is Artificial Intelligence (AI) Changing the Future of Architecture? – AiThority

Posted: December 25, 2019 at 6:51 am

Artificial Intelligence (AI) has always been a topic of debate: is it good for us? Will getting deeper into this high-technology world give us a better future or not? According to recent research, almost everyone has different requirements for automation, and much of the work humans once did is now done by the latest high-intelligence computers. You are probably familiar with how Artificial Intelligence is changing industries like medicine, automobiles, and manufacturing. Well, what about architecture?

The main issue is whether these high-tech machines will actually replace their creators. These computers are still not good at generating certain kinds of ideas, and for that you have to rely on human intelligence. They can, however, save a great deal of time by handling time-consuming tasks, and architects can use that time to create other designs.

Artificial Intelligence is a high-technology system that can perform many tasks but still needs some human effort for things like visual interpretation and design decisions. AI delivers the best possible results by analyzing tons of data, and that is how it can excel in architecture.


While creating new designs, architects usually go through past designs and the data produced throughout the making of a building. Instead of investing a lot of time and energy to create something new, a computer can analyze that data in a short period and make recommendations accordingly. With this, an architect can test and research simultaneously, sometimes even without pen and paper. It seems likely that organizations and clients will increasingly turn to computers for master plans and construction.

However, the value of architects, and of the human effort of analyzing a problem and finding the right solution, will always remain unchallenged.


Parametric architecture is a hidden weapon that allows an architect to change specific parameters to create many different output designs, including structures that could not have been imagined earlier. It is like an architect's programming language.

It allows an architect to take a building and reframe it to fit a different set of requirements. A process like this lets Artificial Intelligence reduce the architect's effort, so the architect can freely think about different ideas and create something new.
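
To make the idea concrete, here is a minimal sketch of parametric design in Python. A design is treated as a function of named parameters, and sweeping those parameters yields a whole family of candidate structures; the parameter names and the height limit below are illustrative assumptions, not the interface of any real architectural tool.

```python
# A minimal parametric-design sketch: a building variant is derived entirely
# from a few named parameters, so sweeping the parameters enumerates designs.
from itertools import product

def tower_variant(floors: int, floor_height_m: float, footprint_m2: float) -> dict:
    """Derive one candidate building from three parameters."""
    return {
        "floors": floors,
        "total_height_m": round(floors * floor_height_m, 1),
        "gross_area_m2": round(floors * footprint_m2, 1),
    }

# Sweep the parameter space to generate variants for the architect to review.
variants = [
    tower_variant(f, h, a)
    for f, h, a in product((8, 12, 16), (3.0, 3.5), (450.0, 600.0))
]

# Keep only variants under a hypothetical 50 m zoning height limit.
feasible = [v for v in variants if v["total_height_m"] <= 50.0]
for v in feasible:
    print(v)
```

In a real tool the parameters would drive full building geometry, but the workflow is the same: state the rule once, then let the machine enumerate and filter the candidates.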

Constructing a building is not a one-day task; it needs a great deal of pre-planning. Sometimes even that pre-planning is not enough, and a little more effort is needed to bring an architect's vision to life. Artificial Intelligence will make an architect's work significantly easier by analyzing all the data and creating models, saving the architect a great deal of time and energy.

All in all, AI can be called an estimation tool for various aspects of constructing a building. And when it comes to the construction itself, AI can help to the point where human effort becomes negligible.

The countless hours of research at the start of any new project are where AI steps in and makes things easy for the architect, analyzing the aggregate data in milliseconds and recommending models so that the architect can think about the conceptual design without even using pen or paper.

For example, when building a home for a family, if you have complete information about the family's requirements, you can simply pull all the zoning data using AI and generate design variations in a short period of time.

This era of modernization demands that everything be smartly designed. Just like smart cities, today's high-technology society demands smart homes. Architects no longer have to worry only about how to use AI to create home designs; they should also worry about making the user's experience worth paying for.

Change is something that should never change. The way your city looks today will be very different in the coming years. The most challenging task for an architect is city planning, which needs a lot of precision. The primary task is to analyze all the possible aspects and understand how a city will flow and how its population will change in the coming years.

All these factors indicate one thing: future architects will put less effort into the business of drawing and more into satisfying all the user's requirements with the help of Artificial Intelligence.

Read More: How AI and Automation Are Joining Forces to Transform ITSM


Chanukah and the Battle of Artificial Intelligence – The Ultimate Victory of the Human Being – Chabad.org

Posted: at 6:51 am

Chanukah is generally presented as a commemoration of a landmark victory for religious freedom and human liberty in ancient times. Big mistake. Chanukah's greatest triumph is still to come: the victory of the human soul over artificial intelligence.

Jewish holidays are far more than memories of things that happened in the distant past; they are live events taking place right now, in the ever-present. As we recite on Chanukah's parallel celebration, Purim, "These days will be remembered and done in every generation." The Arizal explains: "When they are remembered, they reenact themselves."

And indeed, the battle of the Maccabees is an ongoing battle, one embedded deep within the fabric of our society, one that requires constant vigilance lest it sweep away the foundations of human liberty. It is the struggle between the limitations of the mind and the infinite expanse that lies beyond the mind's restrictive boxes, between perception and truth, between the apparent and the transcendental, between reason and revelation, between the mundane and the divine.

Today, as AI development rapidly accelerates, we may be participants in a yet deeper formalization of society: the struggle between man and machine.

Let me explain what I mean by the formalization of society. Formalization is something the manager within us embraces, and something the incendiary, creative spark within that manager defies. It's why many bright kids don't do well in school, why our most brilliant, original minds are often pushed aside for promotions while the survivors who follow the book climb high, why ingenuity is lost in big corporations, and why so many of us are debilitated by migraines. It's also a force that bars anything transcendental or divine from public dialogue.

Formalization is the strangulation of life by reduction to standard formulas. Scientists reduce all change to calculus, sociologists reduce human behavior to statistics, AI technologists reduce intelligence to algorithms. That's all very useful, but it is no longer reality. Reality is not reducible, because the only true model of reality is reality itself. And what else is reality but the divine, mysterious and wondrous space in which humans live?

Formalization denies that truth. To reduce is useful, to formalize is to kill.

Formalization happens in a mechanized society because automation demands that we state explicitly the rules by which we work and then set them in silicon. It reduces thought to executable algorithms, behaviors to procedures, ideas to formulas. That's fantastic, because it potentially liberates us warm, living human beings from repetitive tasks that can be performed by cold, lifeless mechanisms, so we may spend more time on those activities that no algorithm or formula could perform.

Potentially. The default, however, without deliberate intervention, is the edifice complex.

The edifice complex is what takes place when we create a device, institution or any other formal structure (an edifice) to more efficiently execute some mandate. That edifice then develops a mandate of its own: the mandate to preserve itself by the most expedient means. And then, just as in the complex it sounds like, The Edifice Inc., with its new mandate, turns around and suffocates to death the original mandate for which it was created.

Think of public education. Think of many of our religious institutions and much of our government policy. But also think of the general direction in which industrialization and mechanization have led us since the Industrial Revolution took off 200 years ago.

It's an ironic formula. Ever since Adam named the animals and harnessed fire, humans have built tools and machines to empower themselves, to increase their dominion over their environment. And, yes, in many ways we have managed to increase the quality of our lives. But in many other ways, we have enslaved ourselves to our own servants: to the formalities of those machines, factories, assembly lines, cost projections, policies, etc. We have coerced ourselves into ignoring the natural rhythms of human life, the natural bonds and covenants of human community, the spectrum of variation across human character and our natural tolerance to that wide deviance, all to conform to those tight formalities our own machinery demands in the name of efficacy.

In his personal notes in the summer of 1944, having barely escaped from occupied France, the Rebbe, Rabbi Menachem M. Schneerson of righteous memory, described a world torn by a war between two ideologies: between those for whom the individual was nothing more than a cog in the machinery of the state, and those who understood that there can be no benefit to the state by trampling the rights of any individual. The second ideology, that held by the western Allies, is, the Rebbe noted, a Torah one: "If the enemy says, give us one of you, or we will kill you all!" declared the sages of the Talmud, "not one soul shall be deliberately surrendered to its death."

Basically, the life of the individual is equal to the whole. Go make an algorithm from that. The math doesn't work. Try to generalize it. You can't. It will generate what logicians call a deductive explosion. Yet it summarizes a truth essential to the sustainability of human life on this planet, as that world war demonstrated with nightmarish poignance.

That war continued into the Cold War. It presses on today with the rising economic dominance of the Communist Party of China.

In the world of consumer technology, total dominance of The Big Machine was averted when a small group of individuals pressed forward against the tide by advancing the human-centered digital technology we now take for granted. But yet another round is coming, and it rides on the seductive belief that AI can do its best job by adding yet another layer of formalization to all of society's tasks.

Don't believe that for a minute. The telos of technology is to enhance human life, not to restrict it; to provide human beings with tools and devices, not to render them as such.

Technology's ultimate purpose will come in a time of which Maimonides writes, when "the occupation of the entire world will be only to know the divine." AI can certainly assist us in attaining that era and living it, as long as we remain its masters and do not surrender our dignity as human beings. And that is the next great battle of humanity.

To win this battle, we need once again only a small army, but an army armed with more than vision. They must be people with faith. Faith in the divine spark within the human being. For that is what underpins the security of the modern world.

Pundits will tell you that our modern world is secular. Don't believe them. They will tell you that religion is not taught in American public schools. It's a lie. Western society is sustained on the basis of a foundational, religious belief: that all human beings are equal. That's a statement with no empirical or rational support, because it is neither empirical nor rational. It is a statement of faith. Subliminally, it means: the value of a single human life cannot be measured.

In other words, every human life is divine.

No, we don't say those words; there is no class in school discussing our divine image. Yet it is a tacit, unspoken belief. Western society is a church without walls, a religion whose dogmas are never spoken, yet guarded jealously, mostly by those who understand them the least. Pull out that belief from between the bricks and the entire edifice collapses to the ground.

It is also a ubiquitous theme in Jewish practice. As I've written elsewhere, leading a Jewish way of life in the modern era is an outright rebellion against the materialist reductionism of a formalized society.

We liberate ourselves from interaction with our machines once a week, on Shabbat, and rise to an entirely human world of thought, prayer, meditation, learning, songs, and good company. We insist on making every instance of food consumption into a spiritual, even mystical event, by eating kosher and saying blessings before and after. We celebrate and empower the individual through our insistence that every Jew must study and enter the discussion of the hows and whys of Jewish practice. And on Chanukah, we insist that every Jew must create light and increase that light each day; that none of us can rely on any grand institution to do so as our proxy.

Because each of us is an entire world, as our sages state in the Mishnah: "Every person must say, On my account, the world was created."

This is what the battle of Chanukah is telling us. The flame of the menorah is the human soul: "The human soul is a candle of G-d." The war-machine of Antiochus, upon elephants with heavy armor, is the rule of formalization and expedience coming to suffocate the flame. The Maccabee rebels are a small group of visionaries, those who believe there is more to heaven and earth than all science and technology can contain, more to the human soul than any algorithm can grind out, more to life than efficacy.

How starkly poignant it is indeed that practicing, religious Jews were by far the most recalcitrant group in the Hellenist world of the Greeks and Romans.

Artificial intelligence can be a powerful tool for good, but only when wielded by those who embrace a reality beyond reason. And it is that transcendence that Torah preserves within us. Perhaps all of Torah and its mitzvahs were given for this, the final battle of humankind.


Artificial Intelligence, Foresight, and the Offense-Defense Balance – War on the Rocks

Posted: at 6:51 am

There is a growing perception that AI will be a transformative technology for international security. The current U.S. National Security Strategy names artificial intelligence as one of a small number of technologies that will be critical to the country's future. Senior defense officials have commented that the United States is at "an inflection point in the power of artificial intelligence" and even that AI might be "the first technology to change the fundamental nature of war."

However, there is still little clarity regarding just how artificial intelligence will transform the security landscape. One of the most important open questions is whether applications of AI, such as drone swarms and software vulnerability discovery tools, will tend to be more useful for conducting offensive or defensive military operations. If AI favors the offense, then a significant body of international relations theory suggests that this could have destabilizing effects. States could find themselves increasingly able to use force and increasingly frightened of having force used against them, making arms-racing and war more likely. If AI favors the defense, on the other hand, then it may act as a stabilizing force.

Anticipating the impact of AI on the so-called offense-defense balance across different military domains could be extremely valuable. It could help us to foresee new threats to stability before they arise and act to mitigate them, for instance by pursuing specific arms agreements or prioritizing the development of applications with potential stabilizing effects.

Unfortunately, the historical record suggests that attempts to forecast changes in the offense-defense balance are often unsuccessful. It can even be difficult to detect the changes that newly adopted technologies have already caused. In the lead-up to the First World War, for instance, most analysts failed to recognize that the introduction of machine guns and barbed wire had tilted the offense-defense balance far toward defense. The years of intractable trench warfare that followed came as a surprise to the states involved.

While there are clearly limits on the ability to anticipate shifts in the offense-defense balance, some forms of technological change have more predictable effects than others. In particular, as we argue in a recent paper, changes that essentially scale up existing capabilities are likely to be much easier to analyze than changes that introduce fundamentally new capabilities. Substantial insight into the impacts of AI can be achieved by focusing on this kind of quantitative change.

Two Kinds of Technological Change

In a classic analysis of arms races, Samuel Huntington draws a distinction between qualitative and quantitative changes in military capabilities. A qualitative change involves the introduction of what might be considered a new form of force. A quantitative change involves the expansion of an existing form of force.

Although this is a somewhat abstract distinction, it is easy to illustrate with concrete examples. The introduction of dreadnoughts in naval surface warfare in the early twentieth century is most naturally understood as a qualitative change in naval technology. In contrast, the subsequent naval arms race, in which England and Germany competed to manufacture ever larger numbers of dreadnoughts, represented a quantitative change.

Attempts to understand changes in the offense-defense balance tend to focus almost exclusively on the effects of qualitative changes. Unfortunately, the effects of such qualitative changes are likely to be especially difficult to anticipate. One particular reason why foresight about such changes is difficult is that the introduction of a new form of force, from the tank to the torpedo to the phishing attack, will often warrant the introduction of substantially new tactics. Since these tactics emerge at least in part through a process of trial and error, as both attackers and defenders learn from the experience of conflict, there is a limit to how much can ultimately be foreseen.

Although quantitative technological changes are given less attention, they can also in principle have very large effects on the offense-defense balance. Furthermore, these effects may exhibit certain regularities that make them easier to anticipate than the effects of qualitative change. Focusing on quantitative change may then be a promising way forward to gain insight into the potential impact of artificial intelligence.

How Numbers Matter

To understand how quantitative changes can matter, and how they can be predictable, it is useful to consider the case of a ground invasion. If the sizes of two armies double in the lead-up to an invasion, for example, then it is not safe to assume that the effect will simply cancel out and leave the balance of forces the same as it was prior to the doubling. Rather, research on combat dynamics suggests that increasing the total number of soldiers will tend to benefit the attacker when force levels are sufficiently low and benefit the defender when force levels are sufficiently high. The reason is that the initial growth in numbers primarily improves the attacker's ability to send soldiers through poorly protected sections of the defender's border. Eventually, however, the border becomes increasingly saturated with ground forces, eliminating the attacker's ability to exploit poorly defended sections.

Figure 1: A simple model illustrating the importance of force levels. The ability of the attacker (in red) to send forces through poorly defended sections of the border rises and then falls as total force levels increase.

This phenomenon is also likely to arise in many other domains where there are multiple vulnerable points that a defender hopes to protect. For example, in the cyber domain, increasing the number of software vulnerabilities that an attacker and a defender can each discover will benefit the attacker at first. The primary effect will initially be to increase the attacker's ability to discover vulnerabilities that the defender has failed to discover and patch. In the long run, however, the defender will eventually discover every vulnerability that can be discovered and leave behind nothing for the attacker to exploit.

In general, growth in numbers will often benefit the attacker when numbers are sufficiently low and benefit the defender when they are sufficiently high. We refer to this regularity as "offensive-then-defensive scaling" and suggest that it can be helpful for predicting shifts in the offense-defense balance in a wide range of domains.
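
A toy simulation can make this regularity concrete. The sketch below is our illustrative model, not the authors' formal one: it assumes V vulnerable points (border sections, software flaws) and N discovery attempts per side, each attempt landing on a uniformly random point, with a point counting as exploitable if the attacker finds it and the defender does not.

```python
# A toy model of offensive-then-defensive scaling. Assumption (ours, for
# illustration): V vulnerable points and N independent uniform "discovery"
# attempts per side; a point is exploitable if the attacker discovers it
# and the defender does not.
def expected_exploitable(num_points: int, attempts_per_side: int) -> float:
    """Expected number of points found by the attacker but missed by the defender."""
    p_found = 1.0 - (1.0 - 1.0 / num_points) ** attempts_per_side
    return num_points * p_found * (1.0 - p_found)

V = 1000  # vulnerable points: border sections, software flaws, etc.
for n in (100, 300, 700, 1500, 3000, 10000):
    print(f"attempts per side = {n:>5}  "
          f"expected exploitable points = {expected_exploitable(V, n):6.1f}")
```

The expected count rises to a peak (near N = V ln 2, where each side has found about half the points) and then falls toward zero: growth in numbers first favors the attacker, then saturates in the defender's favor.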

Artificial Intelligence and Quantitative Change

Applications of artificial intelligence will undoubtedly be responsible for an enormous range of qualitative changes to the character of war. It is easy to imagine states such as the United States and China competing to deploy ever more novel systems in a cat-and-mouse game that has little to do with quantity. An emphasis on qualitative advantage over quantitative advantage has been a fairly explicit feature of American military strategy since at least the so-called Second Offset strategy that emerged in the middle of the Cold War.

However, some emerging applications of artificial intelligence do seem to lend themselves most naturally to competition on the basis of rapidly increasing quantity. Armed drone swarms are one example. Paul Scharre has argued that the military utility of these swarms may lie in the fact that they offer an opportunity to substitute quantity for quality. A large swarm of individually expendable drones may be able to overwhelm the defenses of individual weapon platforms, such as aircraft carriers, by attacking from more directions or in more waves than the platform's defenses are capable of managing. If this method of attack is in fact viable, one could see a race to build larger and larger swarms that ultimately results in swarms containing billions of drones. The phenomenon of offensive-then-defensive scaling suggests that growing swarm sizes could initially benefit attackers, who can focus their attention increasingly intensely on less well-defended targets and parts of targets, before potentially allowing defensive swarms to win out if sufficient growth in numbers occurs.

Automated vulnerability discovery tools stand out as another relevant example; they have the potential to vastly increase the number of software vulnerabilities that both attackers and defenders can discover. The DARPA Cyber Grand Challenge recently showcased machine systems autonomously discovering, patching, and exploiting software vulnerabilities. Recent work on novel techniques such as deep reinforcement fuzzing also suggests significant promise. The computer security expert Bruce Schneier has suggested that continued progress will ultimately make it feasible to discover and patch every single vulnerability in a given piece of software, shifting the cyber offense-defense balance significantly toward defense. Before this point, however, there is reason for concern that these new tools could initially benefit attackers most of all.

Forecasting the Impact of Technology

The impact of AI on the offense-defense balance remains highly uncertain. The greatest impact might come from an as-yet-unforeseen qualitative change. Our contribution here is to point out one particularly precise way in which AI could impact the offense-defense balance, through quantitative increases of capabilities in domains that exhibit offensive-then-defensive scaling. Even if this idea is mistaken, it is our hope that by understanding it, researchers are more likely to see other impacts. In foreseeing and understanding these potential impacts, policymakers could be better prepared to mitigate the most dangerous consequences, through prioritizing the development of applications that favor defense, investigating countermeasures, or constructing stabilizing norms and institutions.

Work to understand and forecast the impacts of technology is hard and should not be expected to produce confident answers. However, the importance of the challenge means that researchers should still try while doing so in a scientific, humble way.

This publication was made possible (in part) by a grant to the Center for a New American Security from Carnegie Corporation of New York. The statements made and views expressed are solely the responsibility of the author(s).

Ben Garfinkel is a DPhil scholar in International Relations, University of Oxford, and research fellow at the Centre for the Governance of AI, Future of Humanity Institute.

Allan Dafoe is associate professor in the International Politics of AI, University of Oxford, and director of the Centre for the Governance of AI, Future of Humanity Institute. For more information, see http://www.governance.ai and http://www.allandafoe.com.

Image: U.S. Air Force (Photo by Tech. Sgt. R.J. Biermann)

Read more:

Artificial Intelligence, Foresight, and the Offense-Defense Balance - War on the Rocks

Posted in Artificial Intelligence | Comments Off on Artificial Intelligence, Foresight, and the Offense-Defense Balance – War on the Rocks

AI Warning: Compassionless world-changing A.I. already here - You WON'T see them coming – Express.co.uk

Posted: at 6:51 am

Fear surrounding artificial intelligence has remained prevalent as society has witnessed the massive leaps the technology sector has made in recent years. Shadow Robot Company director Rich Walker explained that it is not evil A.I. people should necessarily be afraid of, but rather the companies it could masquerade behind. During an interview with Express.co.uk, Mr Walker explained that advanced A.I. with nefarious intent toward mankind would not openly show itself.

He noted that companies that actively do harm to society and the people within it would be more appealing to an A.I. that had goals of destroying humanity.

He said: "There is the kind of standard fear of A.I. that comes from science fiction.

"Which is either the humanoid robot, like from the Terminator, that takes over and tries to destroy humanity.

"Or it is the cold compassionless machine that changes the world around it in its own image, and there is no space for humans in there."


"There is actually quite a good argument that there are cold compassionless machines that change the world around us in their own image.

"They are called corporations.

"We shouldn't necessarily worry about A.I. as something that will come along and change everything.

"We already have these organisations that will do that.

"They operate outside of national rules of law and societal codes of conduct.

"So, A.I. is not the bit that makes that happen; the bits that make that happen are already in place."

He later added: "I guess you could say that a company that has known for 30 years that climate change was inevitable and has systematically defunded research into climate change and funded research that shows climate change isn't happening is the kind of organisation I am thinking of.

"That is the kind of behaviour where you have to say: 'That is trying to destroy humanity.'"


"They would argue, no, they are not trying to do that, but the fact would be that the effect of what they are doing is to destroy humanity.

"If you wanted to have an Artificial Intelligence that was a bad guy, a large corporation that profits from fossil fuels and systematically hid the information that fossil fuels were bad for the planet, that would be an A.I. bad guy in my book."

The Shadow Robot Company has directed its focus toward creating complex, dexterous robot hands that mimic human hands.

The robotics company uses its Tactile Telerobot technology to demonstrate how A.I. programmes can be used alongside human interaction to create complex robotic relationships.


China should step up regulation of artificial intelligence in finance, think tank says – Reuters

Posted: at 6:51 am

QINGDAO, China/BEIJING (Reuters) - China should introduce a regulatory framework for artificial intelligence in the finance industry, and enhance technology used by regulators to strengthen industry-wide supervision, policy advisers at a leading think tank said on Sunday.

FILE PHOTO: China Securities Regulatory Commission Chairman Xiao Gang addresses the Asian Financial Forum in Hong Kong January 19, 2015. REUTERS/Bobby Yip/File Photo

"We should not deify artificial intelligence as it could go wrong just like any other technology," said the former chief of China's securities regulator, Xiao Gang, who is now a senior researcher at the China Finance 40 Forum.

"The point is how we make sure it is safe for use and include it with proper supervision," Xiao told a forum in Qingdao on China's east coast.

Technology to regulate intelligent finance - referring to banking, securities and other financial products that employ technology such as facial recognition and big-data analysis to improve sales and investment returns - has largely lagged behind the sector's development, a report from the China Finance 40 Forum showed.

Evaluation of emerging technologies and industry-wide contingency plans should be fully considered, while authorities should draft laws and regulations on privacy protection and data security, the report showed.

Lessons should be learned from the boom and bust of the online peer-to-peer (P2P) lending sector, where regulations were not introduced quickly enough, said economics professor Huang Yiping of the National School of Development at Peking University.

Chinas P2P industry was once widely seen as an important source of credit, but has lately been undermined by pyramid-scheme scandals and absent bosses, sparking public anger as well as a broader government crackdown.

"Changes have to be made among policy makers," said Zhang Chenghui, chief of the finance research bureau at the Development Research Institute of the State Council.

"We suggest regulation on intelligent finance be written into the 14th five-year plan of the country's development, and each financial regulator - including the central bank, banking and insurance regulators and the securities watchdog - should appoint its own chief technology officer to enhance supervision of the sector."

Zhang also suggested the government bring together the data platforms of each financial regulatory body to better monitor potential risks and act quickly as problems arise.

Reporting by Cheng Leng in Qingdao, China, and Ryan Woo in Beijing; Editing by Christopher Cushing


Why video games and board games aren't a good measure of AI intelligence – The Verge

Posted: at 6:51 am

Measuring the intelligence of AI is one of the trickiest but most important questions in the field of computer science. If you can't understand whether the machine you've built is cleverer today than it was yesterday, how do you know you're making progress?

At first glance, this might seem like a non-issue. "Obviously AI is getting smarter" is one reply. Just look at all the money and talent pouring into the field. Look at the milestones, like beating humans at Go, and the applications that were impossible to solve a decade ago that are commonplace today, like image recognition. How is that not progress?

Another reply is that these achievements aren't really a good gauge of intelligence. Beating humans at chess and Go is impressive, yes, but what does it matter if the smartest computer can be out-strategized in general problem-solving by a toddler or a rat?

This is a criticism put forward by AI researcher François Chollet, a software engineer at Google and a well-known figure in the machine learning community. Chollet is the creator of Keras, a widely used program for developing neural networks, the backbone of contemporary AI. He's also written numerous textbooks on machine learning and maintains a popular Twitter feed where he shares his opinions on the field.

In a recent paper titled "On the Measure of Intelligence," Chollet also laid out an argument that the AI world needs to refocus on what intelligence is and isn't. If researchers want to make progress toward general artificial intelligence, says Chollet, they need to look past popular benchmarks like video games and board games, and start thinking about the skills that actually make humans clever, like our ability to generalize and adapt.

In an email interview with The Verge, Chollet explained his thoughts on this subject, talking through why he believes current achievements in AI have been misrepresented, how we might measure intelligence in the future, and why scary stories about superintelligent AI (as told by Elon Musk and others) have an unwarranted hold on the public's imagination.

This interview has been lightly edited for clarity.

In your paper, you describe two different conceptions of intelligence that have shaped the field of AI. One presents intelligence as the ability to excel in a wide range of tasks, while the other prioritizes adaptability and generalization, which is the ability for AI to respond to novel challenges. Which framework is a bigger influence right now, and what are the consequences of that?

In the first 30 years of the history of the field, the most influential view was the former: intelligence as a set of static programs and explicit knowledge bases. Right now, the pendulum has swung very far in the opposite direction: the dominant way of conceptualizing intelligence in the AI community is the "blank slate" or, to use a more relevant metaphor, the freshly initialized deep neural network. Unfortunately, it's a framework that's been going largely unchallenged and even largely unexamined. These questions have a long intellectual history (literally decades) and I don't see much awareness of this history in the field today, perhaps because most people doing deep learning today joined the field after 2016.

It's never a good thing to have such intellectual monopolies, especially as an answer to poorly understood scientific questions. It restricts the set of questions that get asked. It restricts the space of ideas that people pursue. I think researchers are now starting to wake up to that fact.

In your paper, you also make the case that AI needs a better definition of intelligence in order to improve. Right now, you argue, researchers focus on benchmarking performance in static tests like beating video games and board games. Why do you find this measure of intelligence lacking?

The thing is, once you pick a measure, you're going to take whatever shortcut is available to game it. For instance, if you set chess-playing as your measure of intelligence (which we started doing in the 1970s until the 1990s), you're going to end up with a system that plays chess, and that's it. There's no reason to assume it will be good for anything else at all. You end up with tree search and minimax, and that doesn't teach you anything about human intelligence. Today, pursuing skill at video games like Dota or StarCraft as a proxy for general intelligence falls into the exact same intellectual trap.

This is perhaps not obvious because, in humans, skill and intelligence are closely related. The human mind can use its general intelligence to acquire task-specific skills. A human that is really good at chess can be assumed to be pretty intelligent because, implicitly, we know they started from zero and had to use their general intelligence to learn to play chess. They weren't designed to play chess. So we know they could direct this general intelligence to many other tasks and learn to do these tasks similarly efficiently. That's what generality is about.

But a machine has no such constraints. A machine can absolutely be designed to play chess. So the inference we make for humans, "can play chess, therefore must be intelligent," breaks down. Our anthropomorphic assumptions no longer apply. General intelligence can generate task-specific skills, but there is no path in reverse, from task-specific skill to generality. At all. So in machines, skill is entirely orthogonal to intelligence. You can achieve arbitrary skill at arbitrary tasks as long as you can sample infinite data about the task (or spend an infinite amount of engineering resources). And that will still not get you one inch closer to general intelligence.

The key insight is that there is no task where achieving high skill is a sign of intelligence, unless the task is actually a meta-task that involves acquiring new skills over a broad [range] of previously unknown problems. And that's exactly what I propose as a benchmark of intelligence.
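
(To make concrete the "tree search and minimax" Chollet mentions above, here is a minimal sketch of that kind of task-specific search, applied not to chess but to a toy take-away game of our own choosing: two players alternately remove one or two stones, and whoever takes the last stone wins. The search plays this one game perfectly and can do nothing else, which is exactly his point.)

```python
# Exhaustive minimax on a toy take-away game: remove 1 or 2 stones per turn;
# the player who takes the last stone wins. High skill here says nothing
# about intelligence -- the search is welded to this single task.

def minimax(stones: int, maximizing: bool) -> int:
    """Return +1 if the maximizing player wins with perfect play, else -1."""
    if stones == 0:
        # The previous mover took the last stone and won.
        return -1 if maximizing else 1
    outcomes = [minimax(stones - take, not maximizing)
                for take in (1, 2) if take <= stones]
    return max(outcomes) if maximizing else min(outcomes)

for n in range(1, 10):
    verdict = "win" if minimax(n, True) == 1 else "loss"
    print(f"{n} stones: {verdict} for the player to move")
```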

If these current benchmarks don't help us develop AI with more generalized, flexible intelligence, why are they so popular?

There's no doubt that the effort to beat human champions at specific well-known video games is primarily driven by the press coverage these projects can generate. If the public wasn't interested in these flashy milestones that are so easy to misrepresent as steps toward superhuman general AI, researchers would be doing something else.

I think it's a bit sad because research should be about answering open scientific questions, not generating PR. If I set out to "solve" Warcraft III at a superhuman level using deep learning, you can be quite sure that I will get there as long as I have access to sufficient engineering talent and computing power (which is on the order of tens of millions of dollars for a task like this). But once I'd done it, what would I have learned about intelligence or generalization? Well, nothing. At best, I'd have developed engineering knowledge about scaling up deep learning. So I don't really see it as scientific research because it doesn't teach us anything we didn't already know. It doesn't answer any open question. If the question was, "Can we play X at a superhuman level?," the answer is definitely, "Yes, as long as you can generate a sufficiently dense sample of training situations and feed them into a sufficiently expressive deep learning model." We've known this for some time. (I actually said as much a while before the Dota 2 and StarCraft II AIs reached champion level.)

What do you think the actual achievements of these projects are? To what extent are their results misunderstood or misrepresented?

One stark misrepresentation I'm seeing is the argument that these high-skill game-playing systems represent "real progress toward AI systems, which can handle the complexity and uncertainty of the real world" [as OpenAI claimed in a press release about its Dota 2-playing bot OpenAI Five]. They do not. If they did, it would be an immensely valuable research area, but that is simply not true. Take OpenAI Five, for instance: it wasn't able to handle the complexity of Dota 2 in the first place because it was trained with 16 characters, and it could not generalize to the full game, which has over 100 characters. It was trained on over 45,000 years of gameplay (then again, note how training data requirements grow combinatorially with task complexity), yet the resulting model proved very brittle: non-champion human players were able to find strategies to reliably beat it in a matter of days after the AI was made available for the public to play against.

If you want to one day become able to handle the complexity and uncertainty of the real world, you have to start asking questions like, what is generalization? How do we measure and maximize generalization in learning systems? And that's entirely orthogonal to throwing 10x more data and compute at a big neural network so that it improves its skill by some small percentage.

So what would be a better measure of intelligence for the field to focus on?

In short, we need to stop evaluating skill at tasks that are known beforehand (like chess or Dota or StarCraft) and instead start evaluating skill-acquisition ability. This means only using new tasks that are not known to the system beforehand, measuring the prior knowledge about the task that the system starts with, and measuring the sample-efficiency of the system (which is how much data is needed to learn to do the task). The less information (prior knowledge and experience) you require in order to reach a given level of skill, the more intelligent you are. And today's AI systems are really not very intelligent at all.

In addition, I think our measure of intelligence should make human-likeness more explicit, because there may be different types of intelligence, and human-like intelligence is what we're really talking about, implicitly, when we talk about general intelligence. And that involves trying to understand what prior knowledge humans are born with. Humans learn incredibly efficiently (they only require very little experience to acquire new skills) but they don't do it from scratch. They leverage innate prior knowledge, besides a lifetime of accumulated skills and knowledge.

[My recent paper] proposes a new benchmark dataset, ARC, which looks a lot like an IQ test. ARC is a set of reasoning tasks, where each task is explained via a small sequence of demonstrations, typically three, and you should learn to accomplish the task from these few demonstrations. ARC takes the position that every task your system is evaluated on should be brand-new and should only involve knowledge of a kind that fits within human innate knowledge. For instance, it should not feature language. Currently, ARC is totally solvable by humans, without any verbal explanations or prior training, but it is completely unapproachable by any AI technique we've tried so far. That's a big flashing sign that there's something going on there, that we're in need of new ideas.
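
(For a sense of what an ARC task looks like on disk, here is a schematic example following the dataset's public format: "train" demonstration pairs and a held-out "test" pair, each a small grid of integers. The specific rule below, recolor every 1 to 2, is our own toy stand-in, not a task drawn from the dataset.)

```python
# A toy task in ARC's format: grids of integers, a few "train" demonstration
# pairs, and a "test" pair the solver must complete. The rule here (recolor
# every 1 to 2) is invented for illustration.
task = {
    "train": [
        {"input": [[0, 1], [1, 0]], "output": [[0, 2], [2, 0]]},
        {"input": [[1, 1, 0]],      "output": [[2, 2, 0]]},
        {"input": [[0], [1]],       "output": [[0], [2]]},
    ],
    "test": [
        {"input": [[0, 0, 1]], "output": [[0, 0, 2]]},
    ],
}

def solve(grid):
    """A hand-written solver for this one toy task; the ARC challenge is to
    infer such a rule from the handful of demonstrations alone."""
    return [[2 if cell == 1 else cell for cell in row] for row in grid]

assert all(solve(p["input"]) == p["output"] for p in task["train"])
assert solve(task["test"][0]["input"]) == task["test"][0]["output"]
print("toy task solved")
```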

Do you think the AI world can continue to progress by just throwing more computing power at problems? Some have argued that, historically, this has been the most successful approach to improving performance, while others have suggested that we're soon going to see diminishing returns if we just follow this path.

This is absolutely true if you're working on a specific task. Throwing more training data and compute power at a vertical task will increase performance on that task. But it will gain you about zero incremental understanding of how to achieve generality in artificial intelligence.

If you have a sufficiently large deep learning model, and you train it on a dense sampling of the input-cross-output space for a task, then it will learn to solve the task, whatever that may be: Dota, StarCraft, you name it. It's tremendously valuable. It has almost infinite applications in machine perception problems. The only problem here is that the amount of data you need is a combinatorial function of task complexity, so even slightly complex tasks can become prohibitively expensive.

Take self-driving cars, for instance. Millions upon millions of training situations aren't sufficient for an end-to-end deep learning model to learn to safely drive a car. Which is why, first of all, L5 self-driving isn't quite there yet. And second, the most advanced self-driving systems are primarily symbolic models that use deep learning to interface these manually engineered models with sensor data. If deep learning could generalize, we'd have had L5 self-driving in 2016, and it would have taken the form of a big neural network.

Lastly, given you're talking about constraints for current AI systems, it seems worth asking about the idea of superintelligence: the fear that an extremely powerful AI could cause extreme harm to humanity in the near future. Do you think such fears are legitimate?

No, I don't believe the superintelligence narrative to be well-founded. We have never created an autonomous intelligent system. There is absolutely no sign that we will be able to create one in the foreseeable future. (This isn't where current AI progress is headed.) And we have absolutely no way to speculate what its characteristics may be if we do end up creating one in the far future. To use an analogy, it's a bit like asking in the year 1600: "Ballistics has been progressing pretty fast! So, what if we had a cannon that could wipe out an entire city? How do we make sure it would only kill the bad guys?" It's a rather ill-formed question, and debating it in the absence of any knowledge about the system we're talking about amounts, at best, to a philosophical argument.

One thing about these superintelligence fears is that they mask the fact that AI has the potential to be pretty dangerous today. We don't need superintelligence in order for certain AI applications to represent a danger. I've written about the use of AI to implement algorithmic propaganda systems. Others have written about algorithmic bias, the use of AI in weapons systems, or about AI as a tool of totalitarian control.

There's a story about the siege of Constantinople in 1453. While the city was fighting off the Ottoman army, its scholars and rulers were debating what the sex of angels might be. Well, the more energy and attention we spend discussing the sex of angels or the value alignment of hypothetical superintelligent AIs, the less we have for dealing with the real and pressing issues that AI technology poses today. There's a well-known tech leader that likes to depict superintelligent AI as an existential threat to humanity. Well, while these ideas are grabbing headlines, you're not discussing the ethical questions raised by the deployment of insufficiently accurate self-driving systems on our roads that cause crashes and loss of life.

If one accepts these criticisms, that there is not currently a technical grounding for these fears, why do you think the superintelligence narrative is popular?

Ultimately, I think it's a good story, and people are attracted to good stories. It's not a coincidence that it resembles eschatological religious stories, because religious stories have evolved and been selected over time to powerfully resonate with people and to spread effectively. For the very same reason, you also find this narrative in science fiction movies and novels. The reason why it's used in fiction, the reason why it resembles religious narratives, and the reason why it has been catching on as a way to understand where AI is headed are all the same: it's a good story. And people need stories to make sense of the world. There's far more demand for such stories than demand for understanding the nature of intelligence or understanding what drives technological progress.


In the 2020s, human-level A.I. will arrive, and finally ace the Turing test – Inverse

Posted: at 6:51 am

The past decade has seen the rise of remarkably human personal assistants, increasing automation in transportation and industrial environments, and even the alleged passing of Alan Turing's famous robot consciousness test. Such innovations have taken artificial intelligence out of labs and into our hands.

A.I. programs have become painters, drivers, doctors' assistants, and even friends. But with these new benefits have also come increasing dangers. This ending decade saw the first, and likely not the last, death caused by a self-driving car.

This is #20 on Inverse's 20 predictions for the 2020s.

And as we head toward another decade of machine learning and robotics research, questions surrounding the moral programming of A.I. and the limits of its autonomy will no longer be just thought experiments but time-sensitive problems.

One such area to keep an eye on going forward into a new decade will be partially defined by this question: what kind of legal status will A.I. be granted as its capabilities and intelligence continue to scale closer to those of humans? This is a conversation the archipelago nation of Malta started in 2018, when its leaders proposed that it should prepare to grant or deny citizenship to A.I.s just as it would humans.

The logic behind this is that A.I.s of the future could have just as much agency and potential to cause disruption as any other non-robotic being. Francois Piccione, policy advisor for the Maltese government, told Inverse in 2019 that not taking such measures would be irresponsible.

"Artificial Intelligence is being seen in many quarters as the most transformative technology since the invention of electricity," said Piccione. "To realize that such a revolution is taking place and not do one's best to prepare for it would be irresponsible."

While the 2020s might not see fully fledged citizenship for A.I.s, Inverse predicts that there will be increasing legal scrutiny in coming years over who is legally responsible for the actions of A.I., whether it be their owners or the companies designing them. Instead of citizenship or visas for A.I., this could lead to further restrictions on the humans who travel with them and the ways in which A.I. can be used in different settings.

Another critical point of increasing scrutiny in the coming years will be how to ensure A.I. programmers continue to think critically about the algorithms they design.

This past decade saw racism and death result from poorly designed algorithms and even poorer introspection. Inverse predicts that as A.I. continues to scale, labs will increasingly call upon outside experts, such as ethicists and moral psychologists, to make sure these human-like machines are not doomed to repeat our same dehumanizing mistakes.

As 2019 draws to a close, Inverse is looking to the future. These are our 20 predictions for science and technology for the 2020s. Some are terrifying, some are fascinating, and others we can barely wait for. This has been #20. Read a related story here.


The Crazy Government Research Projects You Might’ve Missed in 2019 – Nextgov

Posted: at 6:51 am

If you imagine the U.S. research community as a family party, the Defense Advanced Research Projects Agency is your crazy uncle ranting at the end of the table, and the government's other ARPA organizations are the in-laws who are buying into his theories.

DARPA and its counterparts, the Intelligence Advanced Research Projects Activity and the Advanced Research Projects Agency-Energy, are responsible for conducting some of the most innovative and bizarre projects in the government's $140 billion research portfolio. DARPA's past research has laid the groundwork for the internet, GPS and other technologies we take for granted today, and though the other organizations are relatively new, they're similarly charged with pushing today's tech to new heights.

That means the futuristic-sounding projects the agencies are working on today could give us a sneak peek of where the tech industry is headed in the years ahead.

And based on the organizations' 2019 research efforts, the future looks pretty wild.

DARPA Pushes the Limits of AI

Last year, DARPA announced it would invest some $2 billion in bringing about the so-called "third wave" of artificial intelligence: systems capable of reasoning and human-like communication. And those efforts are already well underway.

In March, the agency started exploring ways to improve how AI systems like Siri and Alexa teach themselves language. Instead of crunching gargantuan datasets to learn the ins and outs of a language, researchers essentially want the tech to teach itself by observing the world, just like human babies do. Through the program, AI systems would learn to associate visual cues (photos, videos and live demonstrations) with audible sounds. Ultimately, the goal is to build tech that actually understands the meaning of what it's saying.

DARPA also wants AI tools to assess their own expertise and let their operators know when they don't know something. The Competency-Aware Machine Learning program, launched in February, looks to enable AI systems to model their own behavior, evaluate past mistakes and apply that information to future decisions. If the tech thinks its results could be inaccurate, it would let users know. Such self-awareness will be critical as the military leans on AI systems for increasingly consequential tasks.
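
As a loose illustration of the underlying idea, knowing when you don't know, here is a minimal sketch of a classifier that reports its confidence and defers to a human operator below a hand-picked threshold. This is our own toy built on scikit-learn, not DARPA's actual Competency-Aware Machine Learning design.

```python
# A confidence-aware classifier sketch: predict when confident, defer when not.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=2000).fit(X_train, y_train)
probabilities = model.predict_proba(X_test)

THRESHOLD = 0.90  # below this, flag the answer instead of asserting it
for p in probabilities[:5]:
    label, confidence = int(np.argmax(p)), float(np.max(p))
    if confidence >= THRESHOLD:
        print(f"prediction: {label} (confidence {confidence:.2f})")
    else:
        print(f"unsure: best guess {label} at {confidence:.2f}, deferring to operator")
```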

One of the biggest barriers to building AI systems is the amount of computing power required to run them, but DARPA is looking to the insect world to lower that barrier to entry. Through the MicroBRAIN program, the agency is examining the brains of very small flying insects to get inspiration for more energy-efficient AI designs.

Beyond improving the tech itself, DARPA is also looking to AI to tackle some of the most pressing problems facing the government today. The agency is funding research to teach computers to automatically detect errors in deepfakes and other manipulated media. Officials are also investing in AI that could help design more secure weapons systems, vehicles and other network-connected platforms.

Outside of artificial intelligence, DARPA is also working to develop a wide range of other capabilities that sound like they came straight from a sci-fi movie, including but not limited to satellite-repair robots, automated underground mapping technologies and computers powered by biological processes.

IARPA Wants Eyes in the Sky

Today, the intelligence community consumes an immeasurable amount of information, so much that it's virtually impossible for analysts to make sense of it in any reasonable amount of time. In this world of data abundance, intelligence officials see AI as a way to stay one step ahead of adversaries, and the tech is a major priority for their bleeding-edge research shop.

AI has numerous applications across the national security world, and in 2019, improving surveillance was a major goal.

In April, the Intelligence Advanced Research Projects Activity announced it was pursuing AI that could stitch together and analyze satellite images and footage collected from planes, drones and other aircraft. The program, called Space-based Machine Automated Recognition Technique, essentially looks to use AI to monitor all human activity around the globe in real time.

The tech would automatically detect and monitor major construction projects and other anthropogenic activity around the planet, merging data from multiple sources and keeping tabs on how sites change over time. Though their scopes somewhat differ, SMART harkens back to the Air Force's controversial Project Maven program, which sought to use artificial intelligence to automatically analyze video footage collected by drones.

IARPA is also looking to use artificial intelligence to better monitor human activity closer to the ground. In May, the agency started recruiting teams to help train algorithms to follow people as they move through video surveillance networks. According to the solicitation, the AI would piece together footage picked up by security cameras scattered around a particular space, letting agencies track individuals' movements through crowded areas.
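The linking step is essentially person re-identification: reduce each detection to an appearance embedding, then join detections across cameras when the embeddings are close. The sketch below uses made-up vectors and a cosine-similarity cutoff purely to illustrate the idea; it is not the solicitation's method:

```python
# Bare-bones cross-camera re-identification: link a person seen on one
# camera to detections on another by comparing appearance embeddings.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def link_across_cameras(query, gallery, threshold=0.9):
    """Indices of gallery detections similar enough to be the same person."""
    return [i for i, g in enumerate(gallery) if cosine(query, g) >= threshold]

person_cam1 = np.array([0.9, 0.1, 0.4])           # embedding from camera 1
cam2_detections = [np.array([0.88, 0.12, 0.41]),  # likely the same person
                   np.array([0.10, 0.90, 0.20])]  # someone else entirely

print(link_across_cameras(person_cam1, cam2_detections))  # -> [0]
```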

Combine this capability with long-range biometric identification systems - a technology IARPA also began exploring in 2019 - and you could have machines naming people and tracking their movements without spy agencies needing to lift a finger.

The Funding Fight at ARPA-E

The Energy Department's bleeding-edge research office, ARPA-E, is also supporting a wide array of efforts to advance the nation's energy technologies. This year, the organization launched programs to improve carbon-capture systems, reduce the cost of nuclear energy and increase the efficiency of the power grid, among other things.

But despite those efforts, the Trump administration has repeatedly tried to shut down the office.

In its budget request for fiscal 2020, the White House proposed reducing ARPA-E's funding by 178%, giving the agency a final budget of negative $287 million. The administration similarly sought to defund the office in its 2019 budget request.
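As a back-of-envelope check on those figures (taking the article's numbers at face value and writing B for the agency's starting budget), a 178% cut landing at negative $287 million implies:

```latex
B - 1.78\,B = -0.78\,B = -\$287\text{M}
\quad\Longrightarrow\quad
B = \frac{\$287\text{M}}{0.78} \approx \$368\text{M}
```

In other words, the proposal amounted to clawing back more than the agency's entire annual budget of roughly $368 million.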

While it's unclear exactly how much funding ARPA-E will receive next year, it's safe to say its budget will go up. The Senate opted to increase the agency's funding by $62 million in its 2020 appropriations, and the House version of the legislation included a $59 million increase. In October, the House Science, Space and Technology Committee advanced a bill that would provide the agency with nearly $2.9 billion over the course of five years, though the bill has yet to receive a full vote in the chamber.

Read the original post:

The Crazy Government Research Projects You Might've Missed in 2019 - Nextgov

Posted in Artificial Intelligence | Comments Off on The Crazy Government Research Projects You Might’ve Missed in 2019 – Nextgov

Can AI restore our humanity? – Gigabit Magazine – Technology News, Magazine and Website

Posted: at 6:51 am

Sudheesh Nair, CEO of ThoughtSpot, earnestly campaigns for artificial intelligence as a panacea for restoring our humanity - by making us able to do more work.

Whether AI is helping a commuter navigate through a city or supporting a doctor's medical diagnosis, it relieves humans from mind-numbing, repetitive and error-prone tasks. This scares some business leaders, who worry AI could make people lazy, feckless and over-dependent. The more utopian-minded - me included - see AI improving society and business while individuals get to enjoy happier, more fulfilling lives.

Fortunately, this need not launch yet another polarised debate. The more we apply AI to real-world problems, the more glaringly clear it becomes that machine and human intelligence must work together to produce the right outcomes. Humans teach AI to understand context and patterns so that algorithms produce fair, ethical decisions. Equally, AI's blind rationality helps humans overcome destructive failings like confirmation bias.

Crucially, as humans and machines are increasingly able to converse through friendlier interfaces, decision-making improves and consumers are better served. Through this process, AI is already ending what I call the tyranny of averages - where people with similar preferences, habits or even medical symptoms get lumped into broad categories and receive identical service or treatment.

Fewer hours, higher productivity

In business, AI is taking over mundane tasks like expense reporting and timesheets, along with complex data analysis. This means people can devote time to charity work, spend time with their kids, exercise more or just kick back. In their jobs, they get to do all those human things that often wind up on the back burner, like mentoring others and celebrating success. For this reason alone, I see AI as an undeniable force for good.

One strong indicator that AI's benefits are kicking in is that some companies are successfully moving to a four-day workweek. Companies like the American productivity software firm Basecamp and New Zealand's Perpetual Guardian are recent poster children for working shorter hours while raising productivity. This has profound implications for countries like Japan, whose economy is among the least productive despite its people notoriously working the longest hours.

However, AI is about more than working fewer hours. Having to multitask less means less stress over the possibility of dropping the ball. Workers can focus more on tasks that contribute positively and visibly to their companies' success. That's why more employers are starting to place greater value on business outcomes and less on presenteeism.

AI and transparency go hand in hand

But we mustn't get complacent or apply AI uniformly. Even though many studies say that AI will create many more jobs than it replaces, we have to manage its impact differently depending on the type of work it affects. Manual labourers like factory workers, farmers and truck drivers understandably fear the march of technology. In mass-market industries, technology has often (but not always) completely replaced the clearly defined tasks that these workers carry out repeatedly during their shifts. Employers and governments must work together to communicate honestly with workers about the trajectory of threatened jobs and help them adapt and develop new skills for the future.

Overcoming the tyranny of averages in service

One area where we risk automating inappropriately covers entry- and mid-level customer service professions like call centre workers, bank managers and social care providers. Most will agree that automating some formerly personal transactions, like withdrawing cash, turned out pretty well. However, higher-involvement decisions, like buying home insurance or selecting the best credit card, usually benefit from a sympathetic human guiding the customer to the right choice.

Surprisingly, AI may be able to help re-humanise customer service in these areas threatened by over- or inappropriate automation. Figuring out the right product or service to offer someone with complex needs at the right time, price and place is notoriously hard. Whether it's to give a medical diagnosis or recommend pet insurance, AI can give service workers the data they need to provide highly personalised information and expert advice.

There are no simple formulae to apply to the labour market as technology advances and affects all of our lives. While it's becoming clear that AI's benefits to knowledge workers are almost universally positive, others must get the support to adapt and reskill so they are not left behind.

For consumers, however, AI means being freed from the tyranny of averages that makes so many transactions, particularly with large, faceless organisations, so soul-destroying. For this and the other reasons I mentioned, I truly believe AI will indeed help restore our humanity.

Visit link:

Can AI restore our humanity? - Gigabit Magazine - Technology News, Magazine and Website

Posted in Artificial Intelligence | Comments Off on Can AI restore our humanity? – Gigabit Magazine – Technology News, Magazine and Website

The calming and productive environment of Granite Pathways clubhouse – Manchester Ink Link

Posted: at 6:50 am

At 60 Rogers Street in Manchester, the Granite Pathways clubhouse for adults sits on the second floor of an oddly shaped building full of sharp edges. A person approaching on foot from the intersection of Lincoln and Valley streets passes the police station and the city's water treatment plant. The building is on the right side of Hayward Street, near a road barrier that prevents through traffic from disrupting plant operations. The clubhouse is open Tuesdays and Thursdays from 9:30 a.m. to 4:30 p.m.

Up a single flight of stairs, the clubhouse sits behind a large white door. Inside, an expansive area opens up to a front desk, a computer room, a series of tables, a kitchenette unit and two refrigerators. Almost immediately upon entering, a newcomer is greeted with smiles by any number of friendly female staff. During lunchtime, an aroma of delicious, often improvised, food wafts through the air, and convivial conversation can be heard at almost any time of day.

The term clubhouse was coined in 1948, when a group of men recently discharged from a mental health facility in New York decided to help each other get back on their feet and reintegrate into society. When they proved effective at doing so, the men decided to expand their model in a more organized way. Thus, the Pathways Clubhouse system was born.

With over 200 similar facilities all over the world and two in New Hampshire, Pathways seeks to help unemployed and underemployed people find the motivation to work and advance themselves once again. It does this by drawing on Dr. Albert Bandura's self-efficacy theory, which, in a nutshell, states that the more a person does any activity, the more competent they become at it. Practice not only makes perfect, it makes confidence as well.

The basis of Granite Pathways is to improve each person's mental health, first and foremost, by having each member leave their diagnosis at the door. No matter what struggles a person may have had before coming in, once they arrive, they are treated like anyone else, as though nothing was wrong with them to begin with. Members must be clean and dry; sobriety is required of anyone who participates. The staff act as recovery coaches to support individual well-being.

Ann Strachan, the Mental Health Director for Granite Pathways, explains that the clubhouse's ultimate goal is to help people with co-occurring disorders achieve the best quality of life possible. She helped start a clubhouse in Portsmouth in 2014 and has held her current position since 2016. She comes across as an intelligent, resourceful and capable woman, and when she arrives at the Manchester clubhouse, her indefatigable presence enriches and edifies everyone who comes into contact with her.

The Manchester clubhouse previously operated out of Brookside Congregational Church at 2013 Elm Street for five years. At the end of that period, it could not secure funding to move to a new location, despite having served 275 people at one time. From then until May of 2019, Manchester had no clubhouse. Granite Pathways, a subsidiary of Fedcap, was able to open in Manchester once more as part of the $45 million State Opioid Response Grant; the organization also administers the Doorways in Manchester and Nashua.

The clubhouse in Portsmouth intends to apply for accreditation in 2020, which would allow it to bill Medicaid for services rendered. If it is approved, members will be able to fill out paperwork describing which activities they participated in and which they didn't. Every activity is voluntary, and membership is free.

Activities are divided into work units, and members volunteer to perform various duties throughout the day. These include making lunch, working at the reception desk, cleaning (colloquially called germ warfare), running meetings and working on computer skills. In the afternoon, an acupuncturist will sometimes come to provide ear acupuncture, which has been clinically proven to relieve anxiety.

The clubhouse also provides free wi-fi as well as help with employment services. People with co-occurring disorders often live on low income through Social Security, or no income at all. Helping members get back to work is intended to facilitate personal independence and autonomy.

Each month, a meeting is held in the clubhouse to determine how everything is done. During this time, specific issues can be brought up for consideration, and members are encouraged to participate. Decisions are made by consensus, with each member's ideas taken into account. Policies are shaped by what people want and don't want - often what to serve for lunch, which snacks to stock and what events might be of interest.

Taken as a whole, the Granite Pathways clubhouse is a safe, nurturing environment in which people can become their best selves. Members who come in are well fed. Their voices are heard, their concerns addressed. They aren't treated as people with disabilities or illnesses; for a short time twice a week, they're just people. There is, it turns out, emotional abundance to be found in routine simplicity.

Winter Trabex is a freelance writer from Manchester.

Excerpt from:

The calming and productive environment of Granite Pathways clubhouse - Manchester Ink Link

Posted in Germ Warfare | Comments Off on The calming and productive environment of Granite Pathways clubhouse – Manchester Ink Link