How to clone a hard drive or SSD – PC Gamer

You just bought a shiny new SSD and want to throw it into your aging mid-tower PC. But wait – the horror of having to reinstall Windows and all of your applications begins to set in. If you don't want to deal with that hassle, you can use a simple cloning utility to clone your old drive to your new SSD. We've rounded up a couple of free cloning utilities that are easy to use, so you don't have to go through the effort of reinstalling your OS and applications all over again.

Note: Before you attempt to clone your hard drive or SSD, we highly recommend backing up all your data first. In addition, make sure the drive you are cloning to has enough storage space to hold all the cloned data. For instance, you wouldn't want to try to clone a 2TB HDD onto a 256GB SSD, now would you?
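If you want to double-check that note before you start, the arithmetic is simple enough to script. This is only an illustrative sketch – the byte counts below are made up for the example, and on a real machine you would plug in the figures your OS reports (for instance via Python's `shutil.disk_usage`):

```python
# Hypothetical sanity check: will the data used on the source drive fit
# on the target drive? Sizes here are illustrative, not measured.

def clone_will_fit(source_used_bytes: int, target_total_bytes: int) -> bool:
    """Return True if the data in use on the source fits on the target drive."""
    return source_used_bytes <= target_total_bytes

TB = 1000 ** 4  # drive makers use decimal units
GB = 1000 ** 3

# A 2TB HDD with 1.5TB in use will not fit on a 256GB SSD...
print(clone_will_fit(int(1.5 * TB), 256 * GB))  # False
# ...but it would fit comfortably on a new 2TB SSD.
print(clone_will_fit(int(1.5 * TB), 2 * TB))    # True
```

The cloning utilities below perform the same check for you, but it never hurts to know the answer before you begin.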

The first data copying method we'll go over uses Samsung Data Migration, so make sure you've installed that new Samsung SSD alongside the old OS drive you want to clone from.

Note: You will need a Samsung SSD installed on your machine for this software to work.

Step 1: Download the installer from http://www.samsung.com/global/business/semiconductor/samsungssd/downloads.html

Step 2: Run the installer and click "I accept" at the end to agree to the terms and conditions.

Step 3: Once the software is installed, it will launch and ask if you want to update to the latest version. Click Update to begin downloading the newest patches.

Step 4: After the update is complete, the software will install the patches and ask you to agree to the Samsung terms and conditions again.

Step 5: From this window, select the Source Disk and Target Disk. The Target Disk must be a Samsung SSD, but the Source Disk can be any drive that currently holds your OS. Once you've selected your disks, click Start and the cloning process will begin. Note: Leave your computer alone while you're cloning the OS, as running other processes at the same time may corrupt the clone. This goes for the other cloning utilities as well.

After the software is done cloning, you can shut down your PC and boot from your newly-cloned SSD.

The second method we will discuss uses the program Macrium Reflect and will work with any drive, regardless of brand. So before you begin, make sure you plop in that new drive along with your old drive you want to clone from.

Step 1: Go to the free version of Macrium here.

Step 2: Click the download button in the Macrium Reflect Download Agent, then run the software's installer.

Note: Make sure to read the fine print throughout the installation process so you don't install any adware. Cnet's Download.com has become infamous for sneaking it in. (Here are some general tips for avoiding malware/adware.)

Step 3: Open the software and click on "Clone this disk." The software will then let you choose which disks you want as your source and target. When you have selected your disks, click Next to start cloning your drive.

Macrium Reflect useful tips:

Creating bootable rescue media: Macrium Reflect can also help you make bootable rescue media. This tool is located under Other Tasks. We always recommend making recovery media, just in case your hard drive or SSD fails on you.

Creating an image of your hard drives: Under Backup Tasks, you can also create a disk image of your hard drive or SSD.

These are just a few of the cloning tools available; there are many others, such as Seagate's DiscWizard (for Seagate drives), along with other free storage cloning tools such as GParted and Clonezilla.


Some still attack Darwin and evolution. How can science fight back … – The Guardian

Based on current evidence, Darwin's ideas still seem capable of explaining much, if not all, of what we see in nature. Photograph: Philipp Kammerer/Alamy

I can save you the effort of reading AN Wilson's exposé on Darwin, which did the rounds over the weekend, characterising the famous scientist as a fraud, a thief, a liar, a racist and a rouser of Nazism. Instead, head over to Netflix and watch the creationist made-for-TV movie A Matter of Faith, which covers many of the same arguments and also includes a final scene in which a fictional evolutionary biologist, standing alone in his study, holds a rubber chicken in his hands and finds himself deliberating over the question of which came first, the chicken or the egg. At least that was an original take on these tiresome accusations.

And so, here we are again, quietly drawing breath and smiling politely while the same familiar discoveries about Darwin arise once more. Was the blood spilled by the Nazis on Darwin's hands? Did he steal his big idea from others? Is evolution by natural selection a great hoax? Are the Darwinians covering something up? Wilson appears to have hit upon a rich seam of clichés in his five years of research for his book, Charles Darwin: Victorian Mythmaker.

In particular, it's nice to see fossils come in for a kicking again. "Palaeontology has come up with almost no missing links of the kind Darwinians believe in," pants Wilson. If you too are panting at this notion, I implore you to visit a museum. Visit as many as you can. Better still, collect and study your own fossils – they are quite common. In the world's museums and store-rooms, there are hundreds of millions of them, and they all fit into broadly recognisable patterns of geological age and within the framework of what you or I would call evolution. Oh, you meant transitional fossils of whales specifically? Yep, it's here. Oh, you meant birds? Here. Oh, you meant primates? Yep. Oh, you meant land fish? Here you go. Oh, you meant early human-like ancestors? There's a link to more than a million scientific articles about the subject here.

"But where are the transitional fossils?" comes the familiar cry again. Knowing what I have learned about the intricacy and rarity of fossilisation, if anything would make me genuinely consider the presence of an all-seeing God it would be the discovery of an unbroken chain of 60,000 fossil skeletons, following the strata upwards, going smoothly from species A to species B. But that's not the point, I guess, and Wilson should know it.

Scientists tend to fit into two camps on the issue of how to deal with this familiar kind of Darwin-baiting. In the modern age some, such as the American science communicator Bill Nye, choose to debate the anti-Darwinians on live TV. Others, such as Richard Dawkins, prefer to starve them of the oxygen they require by politely ignoring them – a kind of personal exercise in the non-validation of non-scientific ideas. So what is the approach we should take, as everyday lovers of science? I would suggest, and this may sound bold, we simply carry on regardless. Mostly.


The truth is that – and this is worth saying a million times over – most scientists probably don't think about Darwin very much in their day-to-day studies, and would consider themselves Darwinists in much the same way they would round-Earthers or wifi-users. This is, after all, the best working theory we have to understand the nature that we see around us. Also, I think we are all OK with entertaining the idea that, if a more scientifically accurate way of explaining the diversity of life on Earth comes along, Darwin would be ousted. It's just that, based on current evidence, Darwin's ideas still seem capable of explaining much, if not all, of what we see in nature. Hence, our kids learn about him in schools, and popular science books that refute his influence are treated with understandable confusion, concern or disdain.

Sadly, many people will not find their way to this end-point, so suspicious are they of science, evolution and scientific ideas. For me, one of the most pressing problems in science is how we engage this lost audience, because they're missing out on a wonderful experience – that of chasing real truths about some of the most beautiful and complex repeating patterns in nature, an apparent universal law that many people can and do balance regularly alongside their religious beliefs. For starters, their scepticism could come in quite handy.

So how can we connect with people who shout so loudly about this, science's greatest apparent conspiracy? How do we draw them in and get them to re-engage with science? I'd love to know your thoughts about this. Contrary to the popular belief about those involved in science, I think we're open to ideas. So let us know. You'll find us ignorant about a great number of things. Just, unlike some, never wilfully.

Jules Howard is a zoologist and the author of Sex on Earth and Death on Earth


The evolution of machine learning – TechCrunch

Catherine Dong, Contributor

Catherine Dong is a summer associate at Bloomberg Beta and will be working at Facebook as a machine learning engineer.

Major tech companies have actively reoriented themselves around AI and machine learning: Google is now AI-first, Uber has ML running through its veins and internal AI research labs keep popping up.

They're pouring resources and attention into convincing the world that the machine intelligence revolution is arriving now. They tout deep learning, in particular, as the breakthrough driving this transformation and powering new self-driving cars, virtual assistants and more.

Despite this hype around the state of the art, the state of the practice is less futuristic.

Software engineers and data scientists working with machine learning still use many of the same algorithms and engineering tools they did years ago.

That is, traditional machine learning models – not deep neural networks – are powering most AI applications. Engineers still use traditional software engineering tools for machine learning engineering, and they don't work: the pipelines that take data to model to result end up built out of scattered, incompatible pieces. There is change coming, as big tech companies smooth out this process by building new machine learning-specific platforms with end-to-end functionality.

Large tech companies have recently started to use their own centralized platforms for machine learning engineering, which more cleanly tie together the previously scattered workflows of data scientists and engineers.

Machine learning engineering happens in three stages – data processing, model building, and deployment and monitoring. In the middle we have the meat of the pipeline: the model, which is the machine learning algorithm that learns to predict given input data.

That model is where deep learning would live. Deep learning is a subcategory of machine learning algorithms that use multi-layered neural networks to learn complex relationships between inputs and outputs. The more layers in the neural network, the more complexity it can capture.

Traditional statistical machine learning algorithms (i.e. ones that do not use deep neural nets) have a more limited capacity to capture information about training data. But these more basic machine learning algorithms work well enough for many applications, making the additional complexity of deep learning models often superfluous. So we still see software engineers using these traditional models extensively in machine learning engineering even in the midst of this deep learning craze.

But the bread of the sandwich – the process that holds everything together – is what happens before and after training the machine learning model.

The first stage involves cleaning and formatting vast amounts of data to be fed into the model. The last stage involves careful deployment and monitoring of the model. We found that most of the engineering time in AI is not actually spent on building machine learning models – it's spent preparing and monitoring those models.

Despite the focus on deep learning at the big tech company AI research labs, most applications of machine learning at these same companies do not rely on neural networks and instead use traditional machine learning models. The most common models include linear/logistic regression, random forests and boosted decision trees. These are the models behind, among other services tech companies use, friend suggestions, ad targeting, user interest prediction, supply/demand simulation and search result ranking.
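To give a sense of how simple these traditional models are, here is a minimal logistic regression trained by gradient descent, using nothing but the Python standard library. This is only a sketch on invented toy data – in practice engineers reach for a library such as scikit-learn rather than rolling their own:

```python
# A tiny "traditional" model: logistic regression on 1-D inputs,
# fit by stochastic gradient descent on the log-loss.
import math

def train_logistic(xs, ys, lr=0.5, epochs=200):
    """Fit w, b so that P(y=1|x) = sigmoid(w*x + b)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(w * x + b)))  # predicted probability
            w += lr * (y - p) * x                 # gradient step for the weight
            b += lr * (y - p)                     # gradient step for the bias
    return w, b

def predict(w, b, x):
    return 1 if 1 / (1 + math.exp(-(w * x + b))) >= 0.5 else 0

# Toy, linearly separable data: label is 1 when x > 0.
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)
print([predict(w, b, x) for x in xs])  # [0, 0, 0, 1, 1, 1]
```

Training runs in a fraction of a second on a laptop, which is exactly the scalability and development-speed argument the article makes for these models.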

And some of the tools engineers use to train these models are similarly well-worn. One of the most commonly used machine learning libraries is scikit-learn, which was released a decade ago (although Google's TensorFlow is on the rise).

There are good reasons to use simpler models over deep learning. Deep neural networks are hard to train. They require more time and computational power (they usually require different hardware, specifically GPUs). Getting deep learning to work is hard: it still requires extensive manual fiddling, involving a combination of intuition and trial and error.

With traditional machine learning models, the time engineers spend on model training and tuning is relatively short – usually just a few hours. Ultimately, if the accuracy improvements that deep learning can achieve are modest, the need for scalability and development speed outweighs the value of those improvements.

So when it comes to training a machine learning model, traditional methods work well. But the same does not apply to the infrastructure that holds together the machine learning pipeline. Using the same old software engineering tools for machine learning engineering creates greater potential for errors.

The first stage in the machine learning pipeline – data collection and processing – illustrates this. While big companies certainly have big data, data scientists or engineers must clean the data to make it useful: verify and consolidate duplicates from different sources, normalize metrics, and design and prove features.

At most companies, engineers do this using a combination of SQL or Hive queries and Python scripts to aggregate and format up to several million data points from one or more data sources. This often takes several days of frustrating manual labor. Some of it is likely repetitive work, because the process at many companies is decentralized – data scientists or engineers often manipulate data with local scripts or Jupyter Notebooks.
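The flavor of those one-off cleaning scripts can be sketched in a few lines. The records, field names and sources below are invented for illustration – real pipelines run the same consolidate-and-normalize steps over millions of rows:

```python
# Illustrative data-cleaning step: consolidate duplicate records that arrived
# from two hypothetical sources, then min-max normalize a metric.

raw = [
    {"user_id": 1, "clicks": 10, "source": "web"},
    {"user_id": 2, "clicks": 50, "source": "web"},
    {"user_id": 1, "clicks": 10, "source": "mobile"},  # duplicate of user 1
]

# Consolidate duplicates: keep one record per user_id.
by_user = {}
for rec in raw:
    by_user.setdefault(rec["user_id"], rec)
records = list(by_user.values())

# Normalize the metric to [0, 1] so models see comparable scales.
lo = min(r["clicks"] for r in records)
hi = max(r["clicks"] for r in records)
for r in records:
    r["clicks_norm"] = (r["clicks"] - lo) / (hi - lo)

print([(r["user_id"], r["clicks_norm"]) for r in records])  # [(1, 0.0), (2, 1.0)]
```

When every team writes its own variant of this script against its own local copy of the data, the decentralization and repetition the article describes follow naturally.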

Furthermore, the large scale of big tech companies compounds errors, making careful deployment and monitoring of models in production imperative. As one engineer described it, "At large companies, machine learning is 80 percent infrastructure."

However, traditional unit tests – the backbone of traditional software testing – don't really work with machine learning models, because the correct output of a machine learning model isn't known beforehand. After all, the purpose of machine learning is for the model to learn to make predictions from data without the need for an engineer to specifically code any rules. So instead of unit tests, engineers take a less structured approach: they manually monitor dashboards and program alerts for new models.

And shifts in real-world data may make trained models less accurate, so engineers re-train production models on fresh data on a daily to monthly basis, depending on the application. But a lack of machine learning-specific support in the existing engineering infrastructure can create a disconnect between models in development and models in production – normal code is updated much less frequently.

Many engineers still rely on rudimentary methods of deploying models to production, like saving a serialized version of the trained model or its weights to a file. Engineers sometimes need to rebuild model prototypes and parts of the data pipeline in a different language or framework so that they work on the production infrastructure. Any incompatibility at any stage of the machine learning development process – from data processing to training to deployment to production infrastructure – can introduce error.
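The "serialized model in a file" hand-off can be sketched with the standard-library pickle module. The weights below are invented placeholders; a real pipeline would also version the file, record the training data and code that produced it, and validate it before serving:

```python
# Rudimentary deployment: dump trained parameters to a file in the training
# process, load them back in the serving process. Weights are illustrative.
import os
import pickle
import tempfile

weights = {"w": [0.42, -1.3], "b": 0.07}  # stand-in for trained parameters

path = os.path.join(tempfile.gettempdir(), "model_v1.pkl")
with open(path, "wb") as f:
    pickle.dump(weights, f)

# ...later, in the serving process:
with open(path, "rb") as f:
    restored = pickle.load(f)

print(restored == weights)  # True
```

The fragility the article points to comes from everything this file does not carry: the feature-processing code, library versions, and language runtime all have to match on the production side, or the restored model silently misbehaves.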

To address these issues, a few big companies, with the resources to build custom tooling, have invested time and engineering effort into creating their own machine learning-specific tools. Their goal is to have a seamless, end-to-end machine learning platform that is fully compatible with the company's engineering infrastructure.

Facebook's FBLearner Flow and Uber's Michelangelo are internal machine learning platforms that do just that. They allow engineers to construct training and validation data sets with an intuitive user interface, decreasing time spent on this stage from days to hours. Then, engineers can train models with (more or less) the click of a button. Finally, they can monitor and directly update production models with ease.

Services like Azure Machine Learning and Amazon Machine Learning are publicly available alternatives that provide similar end-to-end platform functionality but only integrate with other Amazon or Microsoft services for the data storage and deployment components of the pipeline.

Despite all the emphasis big tech companies have placed on enhancing their products with machine learning, at most companies there are still major challenges and inefficiencies in the process. They still use traditional machine learning models instead of more-advanced deep learning, and still depend on a traditional infrastructure of tools poorly suited to machine learning.

Fortunately, with the current focus on AI at these companies, they are investing in specialized tools to make machine learning work better. With these internal tools, or potentially with third-party machine learning platforms that are able to integrate tightly into their existing infrastructures, organizations can realize the potential of AI.

A special thank you to Irving Hsu, David Eng, Gideon Mann and the Bloomberg Beta team for their insights.


The rapid evolution of big data storage – FedScoop

The data storage landscape is continually changing and in the background, there are a few shifts driving that evolution.

"One of those is culture," Shaun Bierweiler, vice president of public sector for Hortonworks, says in an interview with FedScoop Radio. "We like to say that every agency is a data agency, and that stems from the evolution and the significance that data has taken on in the lives and in the missions of our customers."

With traditional data warehouses in the past, Bierweiler explains, data was used in a very transactional way. But now it's at the center of every decision, he says.

To start with, the structure of big data has evolved. Previously, you knew what was going in and what was coming out. "Today, you have data from an infinite number of sources. You have images, you have videos, you have data encrypted within those items," Bierweiler says in the interview. "The data itself has become much more complex in terms of structure."

The volume is, perhaps, the biggest change.

"Agencies are drowning in data because there's so much of it," he says. "You have to be able to store it, you have to be able to process it. You have to be able to extrapolate the value from that data. And so that's become much more complicated and complex."

Finally, to top that all off, expectations for the use of that data have changed drastically, Bierweiler explains.

"Not only do you have more data that has more information that varies much more greatly, but now users expect to do more with it. And they not only expect to do valuable things with their own data, but they expect to extrapolate information from other users' data as well. What used to be very traditionally stove-piped and siloed is now a mesh of data that's expected to be shared."

With such an array of data types, sizes and uses, Bierweiler advocates for enterprise open source platforms to address users' many needs.

"If you look at a traditional proprietary technology, the lifecycle for them tends to be much longer, and the development cycle even longer," Bierweiler says. "When you get a new release of a proprietary solution, it's often with very old or antiquated solutions, and it's solving the problems that existed when the technology's development model started."

You're also locked in to the vendor's roadmap, he says.

"An enterprise open source platform like Hortonworks harnesses the development model of community – people that aren't paid by Hortonworks. What you get then is a very open solution that not only solves what people are trying to address today, but problems they foresee for tomorrow," Bierweiler tells FedScoop Radio. "And because there aren't barriers or proprietary interfaces, it lends itself to a true best-of-breed solution."

"Consider everything as possible," he recommends to agencies and offices considering open source. "It's often difficult to make that cultural shift from something that you've always done and you convince yourself that that's the only way. Technology has come a very long way and there are creative ways to do things better, cheaper, faster, smarter. So oftentimes, the biggest challenge we have is not a technical hurdle – it's a cultural shift."

See more about how Hortonworks' open source solutions can help you manage your data.


Eoin Morgan: T20 evolution must work in tandem with protection of Test cricket – The Guardian

England's one-day captain, Eoin Morgan, at the Chance to Shine Street National Finals Day in Wolverhampton. Photograph: Courtesy of Chance to Shine

Eoin Morgan has given a few masterclasses this summer. There was his century against South Africa at Headingley, his 87 against Australia at Edgbaston and his 75 against Bangladesh at The Oval. Then there was the hour he spent at Aldersley leisure centre in Wolverhampton. You may have missed that one. It was during the finals of Chance to Shine's street cricket competition, when the kids were taking a break from whacking tape balls around the indoor gym. One asked Morgan which was his favourite shot, another, a young Pakistan fan, what it felt like to be cleaned up by Hasan Ali, and a third wanted to know how much Morgan enjoyed playing for his favourite team. Which wasn't England, or Middlesex, but the Kings XI Punjab. It was another little reminder of the ways in which the game is changing.

Chance to Shine cooked up street cricket to give city kids an easy way to get into the game. It's a six-a-side thrash, played with a tape ball and a plastic bat. Morgan gets it. "I grew up on a council estate," he says. "So I can relate to not having facilities." All he had was a barrel of kit his father kept by the front door. He learned to play on a concrete strip by the side of his house in Rush in North County Dublin. He used to make his own tape balls. "But normally I'd be bowling against my elder brothers and they'd just whack it out of the garden. Then we'd have to get another ball with no tape on it."

Only, Morgan used to dream of playing Test cricket. Most of these kids are hooked on T20. Morgan wanted to be Brian Lara or Graham Thorpe because when he was young England always seemed to be playing West Indies. "Which is mad because Thorpe's our batting coach now." Not long ago, Thorpe was giving him a few pointers on his pull shot. "I was playing it with one leg off the ground, which takes all the power out of your shot. He said that to me and I was like: Hold on, I'm sure I had a picture of you on my bedroom wall playing a pull with one leg off the ground and a floppy hat on."

When Morgan was 13, he and his dad met the Ireland coach Adrian Birrell. "He had ideas about Ireland moving forward and my dad turned to him and said: Well, he wants to play Test cricket. Adrian turned to him and said: Well, he's 13 years old, how do you know you want to play Test cricket? But I just did. I always thought my future was here." Odd how life works out. Morgan came to England because he wanted to play Tests but he's ended up specialising in limited-overs cricket. And now Ireland have Test status. But he's adamant he will never go back.

Morgan is 30, a year older than Dawid Malan, but he's reconciled himself to the idea that he won't play another Test. "I came to terms with that when I took the captaincy," he says. "Because in order to prove myself for Test cricket I would need to play more county cricket, which would have meant giving up my one-day position. And I'm not willing to do that at the moment. I think what we have with the one-day side is quite special; hopefully we're putting a side in the position to compete in 2019. So I'm very happy with the path my career has taken."

At the same time, he tells the kids that the three team-mates he admires most are Joe Root, Ben Stokes and Moeen Ali because they play all three formats. "I suppose ideally I'd like to play all forms but there are not many people that do that any more. There's a bigger division now than there ever has been between Tests and white-ball cricket," he says. "It's becoming a real challenge, that. With T20, there's such a shift – to go straight from Tests to T20 is such a jump." So what does Morgan, a pioneer of modern cricket, make of the shibboleth that Test cricket is the pinnacle of the game?


"It's hard for me to say," he admits. "I've changed my view in the last year or so. Before, we said Test cricket is the best form of the game. But everybody is gearing towards Twenty20 cricket." Morgan has been around. He knows better than most what some of the players in the IPL and the Big Bash think about Test cricket. "How do you get people to engage with, say, Test matches between South Africa and the West Indies or Pakistan v New Zealand? How do you make those series relevant? I don't have the answer. I just know that something needs to be done. There has to be a shift or the divide will become bigger and one form will take over. And I don't see Tests taking over."

Morgan is surprised that the swing towards T20 has not started already in England. He says the players he is with at Middlesex have not made the switch yet. "But we're at a county which does prioritise red-ball cricket. And our young guys coming through – Stevie Eskinazi, Nick Gubbins, George Scott – their priority is still to play Test cricket. Which is interesting because I thought the shift would have been made by now." But Morgan has no doubt it is coming. "The impact of T20 cricket, its influence around the world – that's already happened. We're a way behind it in England. But when it comes it shouldn't come as a shock."

Morgan thinks it will show in the next generation. "Say you've got the next Ben Stokes at Middlesex. He's coming through right now and he makes his debut in two years' time. The question for him is: yes, he wants to play Test cricket, but there are only 11 players in the team and Ben Stokes is still around, and then this young kid gets offered a lot of money, life-changing money, to go and do something else. That's serious pressure. It's not an easy decision. And the answer depends on what background he comes from and where his principles lie."

"A lot of young players around the world are in that position already. That's where the future problem lies. It's already happening in the West Indies and in other countries that don't prioritise Test cricket."

England still draw crowds for Test matches but that will not make them immune. "We will get guys who come along and say they only want to play T20 cricket. We will lose international players because they feel they have a limited amount of time and they want to make the most of their careers, or because their priorities lie elsewhere – because it's not about playing for England, it's about making money. That's already happening around the rest of the world." The England and Wales Cricket Board has three years before it launches its new city-based T20 competition, and Morgan says it will need to spend a lot of that time preparing for the impact it will have on Test cricket.

The key question, he says, is how you grab the people who are being engaged by T20 and introduce them to Test cricket, filtering them through at a lower level. Which brings us back to Chance to Shine's street cricket. "Sunil Narine comes from tape ball. That's where he learned all his tricks and now his fingers are so strong from squeezing the tennis ball to get spin on it," Morgan says. "In the next five years you will see a Sunil Narine playing for England or a guy with a Lasith Malinga action because they played tape ball cricket. That's the beauty of it. It's instant, it's fast, there's no barriers, everyone can play it."

Morgan adds: "The city T20 competition is going to have a huge impact on our game. That should allow us to prepare for what's going to happen with the players, to recognise that, yes, the formats are going to get further and further apart. So we should build them both hand in hand, alongside each other, to protect Test cricket. I think that's very important because if we don't do something about it in England, who is?"

NatWest has partnered with Chance to Shine as part of its #NoBoundaries campaign, championing diversity and inclusion in cricket

This is an extract taken from The Spin, the Guardian's weekly cricket email. To subscribe, just visit this page and follow the instructions.


Business times are a changin’ – White Bear Press

The old adage "evolve or dissolve" has always been part of the challenges facing businesses. This rings truer today than maybe ever before. We see daily the many changes that companies and their leaders must adapt to and prepare for. The most successful businesses must be constantly on the path of meeting their current business goals, while also having the foresight and strategy to look further ahead and anticipate what others cannot.

The Darwinism of business is stronger than ever in our changing business climate. From issues related to workforce, technology, and governmental policies, times are changing at a rapid rate. This is why many area businesses choose to connect with their local economic development group and Chamber of Commerce to leverage shared knowledge and best practices.

Business as we know it today will not be the same in the next few years. Consumers will see continuing changes in their shopping and dining experiences as technology continues to evolve. Businesses need to make bets on how they will adapt to the changing demographics of upcoming tech-savvy generations. A couple of areas where change has been happening at an extremely rapid pace are workplace culture and workforce development.

In my work, I've been seeing tremendous business evolution. Here are some trends of note:

Millennials meet Generation Z – The gen Zers have arrived! 2016 marked the first year they entered the workplace, while a third of management roles were filled by millennials. What are some of the challenges? For one, there is an ever-widening technology gap between younger and older workers. In addition, stereotypes abound between the groups, which causes friction. Interestingly enough, both generations agree that they want businesses to transform the office environment, reward employees, embrace flexibility, and take on causes.

The three Ws – Workplace, wellness and well-being are the three Ws of attraction-tool trends. Getting creative with wellness programs is increasingly common. Companies that leverage wellness programs find multiple levels of benefits that affect their bottom line, including attracting talent, lower absenteeism and lower healthcare costs.

Changing employer/employee contract – Believe it or not, regardless of age, the average tenure for employees in the U.S. is currently 4.6 years. There is no lifetime employment contract, and attracting employees is an ongoing activity for all employers, regardless of whether you have current openings. In addition, the work relationship between employers and employees continues to change, with more people working at home, more operating as independent contractors, and employers utilizing technologies to leverage employees in remote locales.

Evolving benefits: All age groups, genders, and ethnicities care about fair compensation. Other important factors are healthcare and work flexibility. Studies have shown some employee groups value work flexibility above healthcare, and yet only a third of companies even offer it; those that do often don't promote it to job seekers. Other new benefits include assistance with student loans, and I have even heard of a local business considering providing car insurance.

While businesses continue to work on meeting the next challenges, especially in the area of workforce development, we have some local successes to celebrate.

Congrats to I.C. System, Reell Precision Manufacturing, and The Specialty Mfg. Company for receiving the Star Tribune's 2017 Top Workplaces recognition.

Top Workplaces recognizes the most progressive companies in Minnesota based on employee opinions, including feedback about workplace culture, levels of employee engagement, organizational health, and overall satisfaction.

The Northeast Metro is fortunate to have a vibrant business community that continually connects to get ahead of the curve on what's coming next. Regardless of what tomorrow brings, we are all committed to shared success.

Ling Becker is executive director of the Vadnais Heights Economic Development Corporation.

Business times are a changin' - White Bear Press

‘Alexa, I’m ready to walk’: Robotics company using Amazon’s AI to help control exoskeleton – GeekWire

The ARKE lower body exoskeleton by Bionik Laboratories. (Bionik Photo)

It's one thing to be wowed by Amazon's Alexa and her ability to turn off Katy Perry, or turn on the lights. But what if the voice-activated artificial intelligence could help control a robotic device designed to help people walk?

That's the hope of Bionik Laboratories, which announced Tuesday that it has integrated Alexa into its ARKE lower-body exoskeleton. The product is in clinical development, and the future goal is for individuals who have suffered a spinal cord injury or are otherwise severely impaired in their lower body to regain mobility, such as standing and walking.

Bionik says Alexa helps to activate multiple sensors located throughout the ARKE, allowing users to say, "Alexa, I'm ready to stand" or "Alexa, I'm ready to walk."
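Bionik hasn't detailed its implementation, and in a real Alexa skill the cloud service handles the speech recognition and intent matching. Purely as an illustrative sketch of the last step, mapping a recognized utterance to a device command, here is a toy Python example; the utterances, command names, and `dispatch` function are all hypothetical, not Bionik's or Amazon's code.

```python
# Toy sketch: map recognized utterances to exoskeleton commands.
# In a real Alexa skill, Amazon's cloud resolves speech to an
# intent; this mock just normalizes text and looks up a command.

INTENTS = {
    "i'm ready to stand": "STAND",
    "i'm ready to walk": "WALK",
    "i'm ready to sit": "SIT",
}

def dispatch(utterance):
    """Return the command for a recognized utterance, or None."""
    text = utterance.strip().lower().rstrip(".")
    if text.startswith("alexa,"):  # drop the wake word if present
        text = text[len("alexa,"):].strip()
    return INTENTS.get(text)

print(dispatch("Alexa, I'm ready to walk"))  # WALK
```

Unrecognized phrases simply return no command, which is the safe default for a medical device.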

"We are excited to complete the integration of Amazon's Echo and Alexa into our ARKE exoskeleton, combining the power of Amazon's voice-activated technologies with our powerful assistive robotic solutions for the next evolution in treating consumer immobility," Bionik co-founder and COO Michal Prywata said in a news release. "In building ARKE, we had one goal in mind: to empower the user to take back their mobility and regain the ability to complete tasks that the rest of us deem normal, like walking to the refrigerator or going to get the mail. This pairing of our robotic technologies with the power of Amazon's Alexa further pushes the boundaries of what technology can do within the home healthcare industry, and we believe we will help many impaired individuals regain the mobility they once lost."

The Verge points out a few caveats when it comes to using Alexa in this manner, including the fact that the exoskeleton has no built-in microphones, so a user would need to access Alexa via a nearby Echo or Dot device, or through the Alexa app on a mobile device.

Alexa would also have to stand up to the strict guidelines of medical certification, Prywata told The Verge. "Alexa is designed for use in consumer applications. It's a completely different risk profile compared to medical use. You have to make sure everything is perfect [as] you're dealing with people's lives," he said.


DunkWorks Seeks To Promote Innovation In Marine Robotics – CapeNews.net

Facilitating and accelerating failure is the underlying purpose of DunkWorks in Woods Hole, a collaborative facility for marine robotics technologists that will open for public membership in September.

DunkWorks is managed by the Center for Marine Robotics on behalf of Woods Hole Oceanographic Institution. The creators of DunkWorks believe that failure is a necessary part of innovation, and thus aim to catalyze the process by helping innovators fail quickly and fail cheaply.

Playing off the skunkworks laboratory model, the makerspace provides the resources and coaching necessary for innovators to test their ideas.

Marine robotics center assistant director Leslie A. McGee gave a presentation on the center to the Falmouth Economic Development and Industrial Corporation Tuesday morning, August 8.

DunkWorks is located within a repurposed space on the WHOI dock, near other machine shops and automated underwater vehicle laboratories. Equipment currently includes a 3D printer, laser cutter, resin printer, virtual gaming technology, electrical and mechanical workstations, an automated mill, a lathe, an autonomous underwater vehicle station with an overhead crane, and woodworking tools. A second-floor loft provides space for collaborative training.

However, only about 60 percent of the facility's equipment budget has been spent, and the robotics center plans to further outfit DunkWorks after assessing the needs and interests of its users.

The facility is staffed with a guru who provides assistance and training for the laboratory equipment, and helps innovators figure out how to tackle problems. DunkWorks will also offer additional workshops and training to its members.

In addition to developing technologies for the marine robotics industry, the WHOI center hopes that DunkWorks will also promote collaboration within the marine robotics community.

"What we're trying to do is provide an environment for people to come in, get people out of their garages, out of their labs, and move it in here so we create a peer-to-peer environment, so folks can learn from each other," Ms. McGee said.

In addition, individual technologists can save money by conducting some of the engineering work themselves, rather than paying an out-of-house engineering laboratory to complete the work.

Massachusetts Technology Collaborative funded the development of DunkWorks and other projects through a five-year $5 million Robots to the Sea grant to the robotics center in December 2014.

Ms. McGee said the state invests in marine robotics with the explicit intention that institutions in turn drive economic development. Ultimately, accelerated innovation at DunkWorks should also produce advancements in revenues, job creation, average wages, output and investment.

The center plans to charge internal WHOI users a monthly $200 membership fee, and external users a monthly $500 membership fee, with a minimum six-month commitment. Although open to individuals outside WHOI, membership is limited to companies and research communities doing work related to marine robotics.

Initially the facility will be open from 8 AM to 4:30 PM, but the center hopes to eventually provide off-hours access.

"It's a giant thinking and collaborative space, and sometimes that doesn't happen between 8 and 4:30. Sometimes at midnight on a Saturday, you're like, 'Oh, my god, I have an idea, I want to go see whether this thing will work,'" Ms. McGee said.

The facility had a formal opening on July 31, with a ribbon-cutting by Massachusetts Lieutenant Governor Karyn Polito, but will not offer memberships to the public until September. It has been open to internal users in a discovery period for about two months.

The Falmouth EDIC invited Ms. McGee to speak as part of its ongoing series of presentations by members of the Falmouth business community.


Robotics institute set to anchor Pittsburgh’s mammoth Almono … – Tribune-Review

Carnegie Mellon University's Advanced Robotics Manufacturing Institute will be the first anchor tenant to set up shop in a former Hazelwood steel mill, officials said Monday.

Donald Smith, president of the Regional Industrial Development Corp., said the institute would occupy about two-thirds of the first of three buildings planned for Mill 19, a former LTV rolling mill.

Gov. Tom Wolf visited the site Monday to examine the mill property owned by the Almono partnership, which includes the Heinz Endowments and Richard King Mellon and Claude Worthington Benedum foundations. RIDC has managed the site.

"From the commonwealth's point of view it's a way to renovate, rehabilitate an area that's been not underutilized, (but) unutilized for the last 'how many' years," Wolf said. "Aesthetically, think of what it means for the appearances in this area, but then it also reconnects the area of Hazelwood. I think what they're trying to do here is an audacious thing: to try to re-establish that connection in a way that pays tribute to Pittsburgh's current incarnation as a high-tech capital."

Almono is planning a $120 million development including light manufacturing, about 2,000 apartments, shops and restaurants on the 178-acre property bordering the Monongahela River.

Plans call for the removal of Mill 19's siding and construction of three separate buildings under the 1,500-foot-long building's steel skeleton.

Solar panels on the western side of the roof should be enough to completely power the first two buildings.

Gary Fedder, CEO of the robotics institute, said ARM and Almono are finalizing lease details.

"It's going to happen, but we need to work through a few details," he said. "I want this thing built by the end of March. You can do the math and figure out how challenging this is going to be."

CMU in January won more than $253 million in funding to set up the institute. It includes $80 million from the U.S. Defense Department and $173 million from some 200 partner organizations.

The institute will work on integrating robotics and autonomy into manufacturing.

Smith said the Mill 19 design was chosen to maintain Pittsburgh's history as a steel producer and its future as a hub for high-tech manufacturing.

Ride-share giant Uber Technologies has developed a test track for self-driving cars on the Almono site, although Smith said the San Francisco-based company is no longer leasing a railroad roundhouse on the property. Smith said Almono plans to keep the roundhouse.

He said RIDC has scrapped plans to move its offices into Mill 19 because private companies are lining up as potential tenants. He said Almono is negotiating with an international technology company, which he would not name, as a major tenant in the second building.

"It doesn't help the world much to have us here," Smith said. "It really helps a lot more to have a technology company with a presence."

Wolf, who also toured the Hazelwood business district, said he supports a state Senate proposal to help plug a $3.2 billion gap in the state's $32 billion budget.

The Senate voted for a mix of new taxes and tax increases, including a levy on natural gas extraction.

"What I like in the Senate proposal: there is real, recurring revenue," Wolf said. "No one likes any taxes, but we're looking for something that has recurring revenue. It's real, it's not smoke and mirrors, and it passes that test."

Bob Bauder is a Tribune-Review staff writer. Reach him at 412-765-2312, bbauder@tribweb.com or on Twitter @bobbauder.


Technology, robotics, coding and more – Village Living

The school year may have ended on May 23, but teaching never stops. Throughout the summer, the doors of Crestline Elementary were open for learning.

Teams of Crestline teachers offered camps. Third grade teachers Tara Davis and Laura Rives offered a week-long TechCamp for rising third, fourth and fifth graders. This camp provided students an opportunity to learn more about Google Classroom and to work within the framework to create, format and share documents and presentations. Most importantly, the curriculum focused on Digital Citizenship: teaching children how to safely research information and pictures.

Fourth grade science teacher Amy Anderson provided two sessions of Coding and Robotics Camp, open to rising first through sixth graders. The children were introduced to and worked with Ozobot, Dash and Dot, 3D printing, and Osmo. Ozobot and Dash and Dot are interactive robots that let children practice coding skills. Osmo is a tool that transforms an iPad into a hands-on learning tool; its basic features focus on math, spelling and drawing. The 3D printer creates three-dimensional objects by forming layers of material under computer control. All aspects of this camp fostered creativity and problem solving through hands-on play.

-Submitted by Caroline Springfield


Lonely Planet launches an Instagram-like Trips app – TechCrunch

Lonely Planet has a new app for travel enthusiasts. Called Trips, the app uses an Instagram-like design populated with beautiful images of faraway places.

Much like Lonely Planet's website, the idea behind Trips is to offer travelers an easy way to share their experiences and discover new areas of the world, this time on their smartphones.

However, Instagram already has a healthy number of travel enthusiasts uploading photos of fantastic places for viewers to check out on a daily basis. National Geographic, a personal favorite, is one of the most popular on the platform, with a following of nearly 80 million. Lonely Planet, by comparison, has about 1.4 million followers on the platform.

Like Instagram, you can heart, share and follow profiles on Trips as well. But Lonely Planet's Daniel Houghton says the intention is not to compete with the social media giant, but to complement it.

Lonely Planet is an O.G. travel site and has its own loyal niche of travel enthusiasts. Perhaps an app focusing precisely on their passion will be well received.

Trips is Lonely Planet's second app. The online destinations site launched its first app, Guides, last year, which provides tips and advice from on-the-ground experts. More than one million people have since downloaded Guides. Lonely Planet hopes Trips will be met with the same success.

So why not just roll Trips' features into Guides and make one app? Houghton tells TechCrunch that Guides is more of a tool, whereas Trips is geared toward sharing content.

The app is pretty easy to use: just download it, select profiles that suit your interests and scroll through the feed. From there you can pick from a number of the populated stories, many of which come with maps, photos and some information on tours and things you might want to check out. I was personally checking out Rainbow Mountain in Peru, posted about a day ago, while scrolling through the app.

You also can hit the discover icon at the bottom of the app, to the right of the home icon, to search for categories like Adventure or Wildlife and Nature. From there it will lead you to a feed similar to the home feed but with certain trips in mind.

It's pretty easy to publish your own trips as well. Like Instagram, you just hit the plus-sign icon at the bottom of the screen. The app will require access to your phone's camera, and then you'll be able to add your photos. The app will automatically populate a map of the area and allow you to add content and more info about your trip from there.

The one thing I would say Trips lacks is a search tool. It's fine to scroll through the places the app provides in the feed, but it's difficult to look up specific places you are thinking of visiting. If you are like me, you'll want the ability to look up a place before planning your trip, to see what others have to say about it and look at the photos they took.

For those interested in checking it out, Trips is now available for free on iOS and will be available on Android later this year.


How Instagram posts reveal whether you have depression: study – Stuff.co.nz

Last updated 13:36, August 9 2017

The pictures you post on social media could offer clues into the state of your mental health, according to new research.

Just short of 44,000 Instagram images were examined in a study of 166 people, who were also asked questions about their history of mental health.

The researchers examined the filters used (if any), the prevalence of colours, and how many comments and likes each post received.

Those who had depression typically posted images with darker hues and had fewer faces in their posts. They were also less likely to use filters when editing and uploading photos.

What you're sharing - or not sharing - on Instagram can offer insight into your mental health state.


"When depressed participants did employ filters, they most disproportionately favoured the 'Inkwell' filter, which converts colour photographs to black-and-white images," the authors wrote in the paper published in the journal EPJ Data Science.

Healthy participants favoured the Valencia filter, which lightens the photo.

Images with the Valencia filter are likely to be used by those with sound mental health.

For people with depression, their world-view is often darker, they added, which could explain the photo filters they tended to choose. Those with a more positive frame of mind posted more frequently.

The researchers were eventually able to create an algorithm that could determine whether or not an Instagram user had depression. It had a 70 per cent success rate.

The algorithm studied people with similar qualities, like the fact they were active on social media and willing to submit information on their mental health, making it difficult to know if it could be applied to the average user.

Images with darker hues are more likely to be shared by those suffering depression.

Study author and University of Vermont Computational Story Lab co-director Chris Danforth told The Huffington Post: "It shows some promise to the idea that you might be able to build a tool like this to get individuals help sooner".

"The end goal of this would be creating something that monitors a person's voice, how they're moving around and what their social network looks like all the stuff we already reveal to our phones," he said.

"Then that could give doctors a ping to check in or at least some insight. Because maybe there's something going on that even the individual doesn't recognise about their behaviour."
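The article doesn't include the study's actual code, but the colour features it describes (hue, saturation, brightness, as in the darker, grayer "Inkwell" posts) are easy to sketch. Below is a toy illustration in Python using only the standard library; the pixel lists, cutoffs, and function names are invented for illustration and are not the researchers' method.

```python
import colorsys
from statistics import mean

def image_features(pixels):
    """Mean hue, saturation and brightness (HSV, each 0-1) over a
    list of (R, G, B) pixels -- the kind of colour statistics the
    study extracted from each post."""
    hsv = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255) for r, g, b in pixels]
    return tuple(mean(c[i] for c in hsv) for i in range(3))

def looks_low_valence(pixels, brightness_cutoff=0.4, saturation_cutoff=0.3):
    """Toy screen: flag images that are both dark and desaturated,
    the profile the study associated with depressed users' posts.
    Cutoffs are illustrative, not taken from the paper."""
    _, sat, val = image_features(pixels)
    return val < brightness_cutoff and sat < saturation_cutoff

# A dark, grayish image versus a bright, saturated one
dark = [(40, 40, 50)] * 4
bright = [(240, 180, 60)] * 4
print(looks_low_valence(dark), looks_low_valence(bright))  # True False
```

The actual study trained a statistical model over these and other signals (faces per photo, filter choice, comments and likes); this sketch only shows the colour part.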

Where to get help:

Lifeline (open 24/7) 0800 543 354

Depression Helpline (open 24/7) 0800 111 757

Kidsline (open 24/7) 0800 543 754. This service is for children aged 5 to 18. Those who ring between 4pm and 9pm on weekdays will speak to a Kidsline buddy. These are specially trained teenage telephone counsellors.

-Stuff


Our brains as hard drives: could we delete, modify or add memories and skills? – Genetic Literacy Project

Earlier this year marked the 25th anniversary of the airing of The Inner Light, an episode of Star Trek: The Next Generation that focused on the brain and the adaptability of the human mind. It may be time to add it to the expanding list of futuristic developments forecast by the iconic television series.

Indeed, our growing understanding of how memories are formed is pushing us toward a day when we'll be able to scrub disturbing memories from our minds, or even replace them with experiences and skills that would normally take years to learn.

The television episode deals with what happens after the USS Enterprise encounters an alien probe in deep space: Captain Jean-Luc Picard (Patrick Stewart) finds himself on a planet with a humanoid civilization known to have gone extinct 1,000 years earlier. The starship commander spends some six decades in his new environment, gradually embracing his new life. He becomes a community leader, a father and grandfather, and a virtuoso on the native flute. Over time, he mourns the death of a close friend and then his wife. He also copes with the reality that the planet's changing climate will deny his grandchildren a full life. None of this, however, is real.

After seeing a space probe launch (the very probe that the Enterprise encountered in space), Picard wakes up on the Enterprise bridge. What felt like 60 years in Picard's mind actually transpired over the course of just 25 minutes, during which he appeared to be in a coma. The probe was carrying a rather unique message: it consisted of the experience of being part of the dying civilization.

Neural interface technology had packed 60 years' worth of experiences into Picard's brain, and not just images of people and events. Inside the probe was a Kataanian flute, and Picard was able to play it with the expertise that he had developed in his simulated life. Imagine getting an upload of a new talent or skill into your brain as easily as uploading a computer file.

Could we develop a similar capability? That may depend heavily upon a handful of ambitious attempts at brain-computer interfacing. But science is moving in baby steps with other tactics in both laboratory animals and humans.

Thus far, there have been some notable achievements in rodent experiments that haven't translated so well to humans. We don't have a beam that can go into your mind and give you 60 years' worth of new experiences. Nevertheless, the emerging picture is that the physical basis of memory is understandable to the point that we should be able to intervene both in producing and in eliminating specific memories.

At MIT's Center for Neural Circuit Genetics, for example, scientists have modified memories in mice using an optogenetic interface. This technology involves genetic modification of tissues, in this case within the brain, to express proteins that respond to light. Triggered by implants that deliver laser beams, brain cells can be made more or less active. In research published in the prestigious journal Nature, the MIT team used the approach on specific brain circuits important to memory consolidation. The researchers were able to enhance the development of negative memories (for instance, of a shock given to an animal's leg) and also to convert those negative memories into positive memories. The latter was achieved by letting male mice enjoy some time with females while the nerve cells that usually delivered the negative impulses associated with the earlier shock were stimulated through the optogenetic interface.

In humans, work on memory modification has involved N-methyl-D-aspartate (NMDA) receptors, which function like little doors for positive ions to move through the membranes that surround neurons. NMDA receptors are affected by glutamate, a neurotransmitter whose effect on the NMDA receptors is enhanced by an antibiotic called D-cycloserine (DCS). When this happens in an area of the brain called the amygdala, memory consolidation (the stabilization of newly developing memories) is strengthened. Researchers have thus found that DCS can increase the effectiveness of what's called exposure therapy, if given within a few hours before the start of each therapy session. Used to treat anxiety disorders, exposure therapy involves intentionally exposing patients to the thing that provokes their anxiety. If you fear snakes, for example, the therapist will show you a snake, from a distance at first. Eventually, you will be asked to hold the snake. The implication of the research is that DCS improves the learning that removes the anxiety in exposure therapy, which should also have implications for other therapies that work through learning and the formation of new memories and associations.

"Theoretically, [DCS should] facilitate learning processes, so if you can use it to facilitate extinction learning, that's got fantastic clinical implications," noted Mark Bouton, PhD, a University of Vermont professor of psychology quoted in a review from the American Psychological Association.

But using drugs like DCS could be really tricky, requiring precise adherence to very specific timing and dosage that could vary significantly depending on the clinical setting and even between patients. A 2012 study, for example, on patients with post-traumatic stress disorder (PTSD) found that DCS actually made things worse.

The same is true when researchers try to exert the opposite effect on memory by way of the NMDA receptors, namely to blunt memory consolidation. The agent under study in this case is xenon gas, an anesthetic used in humans. When given to laboratory animals within an hour after a traumatic event, xenon blocks the memory consolidation that can lead to long-term trauma equivalent to PTSD in humans. Exercise and nutritional factors also play roles in blocking the processes that make psychological trauma worse.

So what we have here is an immature, but real, tool bag of agents that can help or inhibit the formation of long-term memory. But it is very incomplete and must work in concert with outside factors, including psychotherapy or the experiences of one's life. Still, given the rapid development of virtual reality technology for supplying the outer stimuli, we may very well be headed toward a time when we're able to manage the brain's memories.

David Warmflash is an astrobiologist, physician and science writer. Follow him on Twitter @CosmicEvolution.


Q&A With Joey Graceffa: YouTube Showman, Dystopian Novelist, Nail Polish Thought-Leader – Variety

YouTube star Joey Graceffa, with his upswept mane, arched eyebrows and just a glint of madness in his eyes, was a natural choice to play the enigmatic host of YouTube Red original series Escape the Night, a mega-collab in which a famous cast member gets killed off each week.

Now in the middle of the season 2 run, Graceffa said he had even more fun the second time around with Escape the Night (for which he also served as executive producer) while continuing to pump out daily comedy vlogs for the 8 million-plus subscribers on his YouTube channel.

Graceffa, whose audience skews three-fourths female and is largely between the ages of 13 and 25, has evolved his YouTube material to keep up with the zeitgeist while avoiding controversies that can stir up internet trolls and unwanted publicity. "I kind of like to stay out of the drama," he says. "I feel like if you're in the drama, yeah, it will get you attention now but that will fade."

And the multihyphenate recently added another title: young-adult novelist. Elites of Eden, the second book in his dystopian trilogy, will be released Oct. 3, 2017. The first book in the series, Children of Eden, debuted at No. 1 on the New York Times Best Sellers young-adult hardcover list in October 2016 and remained on the chart for 10 weeks.

Graceffa, 26, sat down with Variety to talk about his life as a YouTuber, his dream of turning the Eden books into movies, how he manages social media, and his expanding line of merchandise (including nail polish). An edited transcript:

Where do you see yourself in your career right now? I think the cool thing about being a YouTuber is the fact that you aren't limited to anything. I mean, you can put your own limits on yourself; I'm one of the people that doesn't. My channel is a variety of all sorts of things. It's constantly evolved, and I think that's how I've stayed relevant on the platform for so long, because I'm constantly changing what I'm doing. Currently, right now, I think it's so cool I get to do fun, crazy stuff on my channel daily, but I can also do big projects like Escape the Night with YouTube Red. I think that's where my true passion lies: being a creative, and creating big worlds and stories, and seeing them come to life.

How did you start the Children of Eden book series? It's something that I have had in my mind for a few years. It started as a short film idea. So I had it all in my head; it was just a matter of getting it down. It's just been really cool to start with a small scene and slowly build up this world, and I've just finished the second book [Elites of Eden]. It's crazy to think I have one of these out in the world after being an avid reader of the genre in my teenage years. YouTube's allowed me to create this universe that I hope will be able to be turned into a movie like Hunger Games and The Maze Runner and Divergent; that's the goal with that, and kind of where I want to see myself go.

What happens in the third book? Well, I haven't started that. [Laughs] I'm still working on it! As I'm creating this, it's all as if it's a movie. I see things very visually. When I sit down, I can just imagine a scene taking place, and I trust my instinct.

Have you pitched the books to studios? Not really. We're kind of going the route of trying to attach a producer or a writer, someone I can take with me into these meetings. It's kind of in the beginning phases.

Would you star in it? I'd have a role in it, just because I have a love for acting. But it's mostly just the creating. I don't mind having other actors play the main characters, as long as I get to have a small little part.

What was your original career goal? Obviously this has been evolving over the years. Before this was even a possibility as a career, I wanted to go the traditional route of maybe being an actor or being in film somehow. Slowly, I just started to realize that what I'm doing is what I love, and why am I trying to go out and attain something that maybe was the mentality of a few years ago, of what was the thing to want? After realizing that, I kind of homed in on my channel, figuring out what I love to do without having to go out and wish for things, and just making them happen myself.

Right after you came out [in 2015], you talked about your concern beforehand about revealing your sexual orientation publicly; you said, "The internet is full of trolls and haters." Do you still think that's the case? Maybe they're still there, but I just don't see them. YouTube has some tools; you can block out certain comments. I never really see it on Twitter. Of course, when you're a little controversial online, that's when the strangers find you and that's when the hate comes. I think right now I'm in a place where really just the people who have found me are looking at me, and I'm not getting a lot of outside attention from strangers from me acting wild, or being a crazy YouTuber and causing controversy and being problematic. I kind of like to stay out of the drama. I feel like if you're in the drama, yeah, it will get you attention now but that will fade.

How is season 2 of Escape the Night different from the first one? Oh, my gosh. We learned so much from the first season; the second season we could make bigger and better. It was just a bigger production. We also have a great cast this season; there are a lot more lighthearted moments, because we have such a comedic cast. We have Liza Koshy, the Gabbie Show, and Tyler Oakley, who gave so many funny moments. We didn't take ourselves so seriously. Season one had more of a murder-mystery vibe; this is more like a group escape.

Who are some YouTubers you would love to get for season 3 of Escape the Night? Well, I haven't gotten the OK for season three. [Laughs] But there are a few people I really, really would love. I would love Jenna Marbles. I don't know, I don't want to put it out there! It's tough; it's a big ask for YouTubers to dedicate the time. It's five days filming straight with night shoots, although, if you get killed off the show, you don't have to stay all five days.

How has it done for your brand? For me personally, it's opened me up to a newer audience, just because of all the marketing YouTube is throwing at it. The show is almost like a giant collaboration. With 10 YouTubers, they're all bringing their audience to the show. Since it lives on my channel, they have to subscribe to get notified, so that was a benefit to me. The first couple days [after Escape the Night season 2 premiered in June], I think I got 100,000 subscribers within the first two days of when it launched.

YouTube is your main platform. How do you think of other platforms in terms of reaching your audience? It's day to day; I post daily. It's almost like a routine. Throughout my day, I'll post on Instagram Stories, and I'll throw to my most recent video. I don't use Snapchat; I'll use the filters and put it on Instagram, but I don't really use the platform. Facebook I'll use just to promote videos. Twitter is really my main place to connect with my audience. They all have their own unique purpose.

Why aren't you on Snapchat? Just too much. I mean, Snapchat and Instagram Stories are pretty much the same thing. Since I already use Instagram to post pictures, I don't know, it's all too much. [Laughs]

How do you manage having to be constantly online? Do you take breaks, like, "After 9 p.m., I'm not checking anything"? The only time it's turned off is when I'm sleeping. That's my break. When I was on The Amazing Race [in 2013], I was forced to be disconnected. I had someone else working my social-media accounts for like a month, uploading my videos. I definitely had some moments when I really just wanted to check in. But I don't feel like I'm too consumed with my social media. I'm pretty good at keeping my phone down.

Would you want to do a network show like Amazing Race again? Yeah, if it fit well and they were open to my world on YouTube, and making sure I can keep that going.

You were a big presence at this year's VidCon. How has it changed over the years? What do you like about it, and what do you not like about it? VidCon, the obvious thing is, it's just gotten bigger and bigger. For me personally, it's become more of a work thing as opposed to, "I'm going to go to this event to hang out with my friends," which is what it was the first few years. It's become so work-heavy, doing press, doing panels. It's always amazing to meet your viewers, and I love that part, but it's still a business thing. I love going, and this year especially, with having my face all over the outside of VidCon, I felt like I was the King of VidCon. But yeah, it's work.

For your YouTube channel, where do you draw your ideas from? It's a lot of researching YouTube and seeing what's going on, what the trends are; seeing someone else's ideas will inspire an original idea of mine. It's taking ideas from everywhere.

To what extent do you try to create trends? There was one I brought back: men wearing nail polish. I was one of the first guys to bring that back, maybe a year and a half ago. It developed into my own line of nail polish, which has been really fun; I never thought that would happen.

Any other merchandise? Yeah, it started with [Crystal Wolf] jewelry. Then it's slowly growing; I'm adding new things. I have sunglasses, T-shirts, and pins. I just love wolves, and I love crystals, so: Crystal Wolf.

But you don't want to just push merch on people. It's definitely a delicate balance of not being too in their face. But you know, when I get excited about it, I just want to keep talking about it. So sometimes I feel I can be annoying, but it's coming from a genuine place because it's my own excitement and love for my products.

Graceffa is repped by UTA and managed by Addition LLC.


Q&A With Joey Graceffa: YouTube Showman, Dystopian Novelist, Nail Polish Thought-Leader - Variety

InsurTech Futures: App designed to notify brokers of motor accidents – Insurance Age

The designers say the app has been developed to improve information gathering after an accident and speed up any subsequent claim.

UK software company Lightstone Systems has launched a new app that sends the details of a driver's motor accident directly to their broker in real time.

Called Mercury Incident Reporter, the app delivers information such as damage to vehicles, injuries, persons involved, witnesses and police attendance, coupled with notes, photographs and video recordings, direct to the broker.

The information is stored on a cloud-based dashboard platform called Aurora.

According to the makers, to help and guide motorists at the time of an incident, the app includes an automatic vehicle registration number lookup tool, address lookup, incident location pinpointing, dash-cam video uploading and a roadside recovery telephone number.

"Road traffic incidents can be stressful, often involving emergency services, witnesses and third parties," said Andrew Ayres, development lead at Lightstone.

"With this in mind, we designed the Mercury app to be very easy to use, allowing the driver to gather quality data in whatever order is most convenient at the time. We've also automated input wherever possible, to minimise the time and effort needed to collect data."

According to the firm, the app improves information gathering, speeds up any subsequent claim and provides a powerful compliance tool for fleet customers.


The best virtual reality headsets you can buy in 2017 – The Telegraph – Telegraph.co.uk

You may need extra controllers to complete your experience and play some of the more advanced titles that are available. The Samsung Gear VR and Google Daydream now come with small point-and-click controllers for navigating through apps and playing games.

With the PSVR, you can play using your DualShock PS4 controller, or you can splash out and pick up the VR Aim Controller, which can be used with games like Farpoint, although right now not much else. The controller can be bought for £145.99.

For the Oculus Rift, you can buy Oculus Touch controllers. Rather than using a handset, these track your real-life hand movements much more closely, giving the feeling that the virtual hands are actually your own. The Oculus Touch controllers are £130.

You can get a budget Google Cardboard virtual reality headset - or a very similar device on Amazon - for just £15. Google and Samsung's mobile headsets are more advanced, rounded and comfortable, and also cost less than £100.

For a more powerful virtual reality setup, the PlayStation VR and Oculus Rift both cost several hundred pounds, and you will probably want to look into picking up a few extras such as handsets.

The HTC Vive is the most expensive on this list, coming in at more than £750 - and you will need a powerful PC setup to run the headset as well.


USC gets inside Sam Darnold’s head with virtual reality film study – Los Angeles Times

Tyson Helton, USC's quarterbacks coach, stood in a film room Monday holding a strange, round gadget that looked like a smaller version of Luke Skywalker's pilot helmet.

Helton said he was going to use it to read minds.

"Before you put this on," Helton said, "I can turn this thing anywhere and see where you're looking."

To demonstrate, he rotated the helmet from left to right. On a television monitor next to him, a view of USC's practice field panned in sync, left to right.

The helmet is USC's latest edge: a virtual-reality set that allows quarterbacks to enter each other's eyes and take repetitions virtually, and coaches to follow along, seeing exactly what the quarterback sees.

At each practice this season, a student trails the quarterbacks holding a long boom topped with cameras pointing forward and back. The student holds the boom a few feet above the quarterbacks' heads. Within an hour after practice, the quarterbacks can don the headset (or watch on an iPad), cue up each play and look around in 360 degrees as if they were back out on the field.

The Trojans have joined a growing number of teams chasing a technological advantage. Stanford, with the company STRIVR, pioneered virtual-reality film study three seasons ago. XOS Digital, USC's vendor for all video, said it counted 25 virtual-reality clients in college and professional football and basketball.

The beach city boys used to throw on USC jerseys and run plays in the driveway, all thinking they'd one day make like Matt Leinart or Reggie Bush. (Zach Helfand)

On Monday, USC provided a glimpse at how its quarterbacks use the system to steal precious practice hours on the virtual field.

Inside the helmet, a glance down revealed the top of a helmet shining in the sun.

"All right, now this is on Sam, OK?" Helton said.

Quarterback Sam Darnold's hands were outstretched for the snap. Straight ahead were USC's linemen. Through headphones, coaches barked instructions. It was like stepping into Darnold's head, or that of some organism floating right above him.

"Look to your left," Helton said. A turn of the head showed Deontay Burnett in the slot. Cornerback Ajene Harris lined up opposite Burnett, mirroring him: a bad sign for that route.

"So right now Sam should say, 'No, I don't have it,'" Helton said.

The clip rolled forward. The ball was snapped. Darnold tried Burnett anyway. Harris jumped the pass and nearly intercepted it.

What was he thinking, Helton wanted to know. After practice, Helton ran the play back. He could follow Darnold's head, look at what Darnold looked at: namely, Burnett and Burnett only.

"Sam being Sam, he thinks he can fit everything in there," Helton said.

In the film room, Darnold knew his error immediately.

Unlike basketball or baseball players, football players earn only marginal gains training on the field alone. The best learning comes in full team drills. But that takes time and people and carries an injury risk.

So Stanford coach David Shaw, an early investor in STRIVR, which was founded by a former Stanford player and graduate assistant named Derek Belch, started his quarterbacks on virtual reality in 2014 to trick their minds into thinking they were seeing real action.

"In the middle of a game, the play's about to start, and he says, 'I've been here before. I know what's going to happen. I've seen this before,'" Shaw said of his quarterbacks at last year's Pac-12 media days. "Boom. Change the protection. Touchdown pass."

Bill McCarthy, the football product manager for XOS, said teams have experimented with deploying cameras at different positions such as linebackers or even the personal protector on punt drills.

USC coach Clay Helton said the running backs have found the training particularly useful. Last week, he was excited about experimenting with the linebackers.

"We tried it," said Eric Espinoza, USC's director of football video operations. "It just didn't give the look that he wanted, and where we were going to place [the cameraman], the defensive coaches were worried about safeties coming up from behind and hitting him."

Espinoza and another video staffer, Daniel Dmytrisin, crunch all of USC's practice video. Coaches and players hoard, consume and obsess over film as if it were legal tender. Film shows which player can win a starting job. It shows which opponent has a tell. It shows what opposing teams will do to break opponents down.

USC records from towers high above its end zones, zoomed out to fit all 22 players. Tyson Helton said he still uses this tape 80% of the time. But it leaves important gaps.

"A lot of times when you coach in the film room and you're looking at the video from the angle up top," Helton said, "it doesn't tell the true story of what [the quarterback] saw."

For players, standard game film is like a good textbook. It's the foundation. But sometimes what they need is a lab. This is especially true for backups.

"Sam uses it some, but because he's getting a lot of reps and he's a little more experienced player, he already knows what he's done wrong," Helton said. "But the beauty of it is the young players, the young quarterbacks, because it allows them to get the closest thing to a live rep as possible."

Jack Sears, USC's freshman quarterback, uses the system more than anyone.

"Jack's a gym rat," Helton said. "Jack lives at the office. I mean, literally you have to kick him out, like, 'Jack, go home, man.' Because he enjoys the process. He enjoys it. Right now he doesn't know anything, and he knows he doesn't know anything. So he's trying like hell to get caught up."

Helton cued up a play from a recent practice. The play gave Sears an easy read to either side.

"You'll watch Jack's eyes right here," Helton said. "Watch him. He goes left with his eyes. He goes right with his eyes. And then back late. You kind of see his head moving a little bit."

With the camera angled down from a few feet over Sears' head, it's clear that both options are open, but his helmet swivels as if he were shaking off a 3-2 curveball. Sears' hesitation let a blitzing linebacker through, so he took off and ran.

To correct these misreads, Sears spends about 20 hours a week watching film on his own, a majority of it in virtual reality.

It is a powerful advantage. The NCAA allows coaches to spend 20 hours a week with players on football-related activities. But Darnold alone takes about half of the repetitions during practice. During the season, his workload bumps to about 75% of repetitions.

As Helton left the film room Monday, Sears walked in, holding a skateboard.

"We were just talking about you," Helton said.


Take a Virtual Reality Ride Along in a Shelby GT350 – The Drive

Watching a 2017 Ford Mustang Shelby GT350 doing what it was made to do is already a treat to the eyes and ears, but Future Motoring just released a video that makes it a whole new kind of experience. The guys over there mounted a 360-degree camera to the back of a slightly modified Shelby to take us for a virtual reality ride.

The video is set in a rural area on country roads where the GT350 shines. This is a great demonstration of the straight-line performance this Shelby is capable of (not that that's the only thing it's good at). Granted, the setting doesn't show us much more than road, trees, and sky, but it's still a cool thing to watch. It's especially cool if you have a VR headset. If you don't, you can still drag the view around on YouTube.

As for the car itself, it's no ordinary GT350. This mean blue Mustang has been equipped with a Ford Performance intake and exhaust, making the 5.2-liter flat-plane-crank Voodoo V-8 under the hood breathe better and sound even more amazing than it does in stock form.

This isn't the first time we've gotten a Mustang VR experience. Back in February, Ford Performance released a video called ReRendezvous, which was a 360-degree virtual reality ride through Paris from the point of view of a 2016 Mustang. This GT350 video is a bit lower budget, but it gives us a much more satisfying sound.


How virtual reality and artificial intelligence are changing life experiences – TNW

It might be considered a platitude, but people are always looking for new ways to break away from the monotonous beat of everyday normalcy, either temporarily or permanently. According to a 2013 report on drug abuse by the United States government, 9.4 percent of individuals aged 12 or older (around 24.6 million people) reported that they had recreationally used a drug within the past month. This tendency to seek life-changing experiences holds whether it concerns the countercultural movements of the 1970s, which infamously involved controversial music and the use of illicit drugs, or the technological experiences of today.

Most people are fascinated with those experiences that allow them to escape the crushing boredom and constancy of regular life. That's why the prospect of virtual realities and the possibilities of automation afforded by artificial intelligence are so exciting. Here are some of the biggest changes related to these two fields that are quickly arriving with the technological advents of modern society.

In order to understand the importance of the changes currently taking place in the field of AI, a brief description of historical approaches to the problem of replicating intelligence is helpful. Let's illustrate these approaches by taking a look at how chess engines function. With a chess engine, the goal is clearly defined, and the problem is how to code a machine to make accurate decisions that lead to a winning outcome despite the difficulty of searching through the enormous number of possible move sequences.

In the past, engineers solved this issue through cruder methods that involved decision trees and certain mathematical heuristics to guide the chess engine's choice and calculation of the best possible move sequences. The issue with this method, and the challenge that impacts most AI development efforts, is that significant amounts of training material are needed for the engine to develop sufficient resolution and accuracy in making its choices.
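To make the decision-tree idea concrete, here is a minimal, hypothetical sketch of that older style of engine. It is not from any real chess program; it uses the toy game Nim (players alternate taking 1-3 stones, and whoever takes the last stone wins) so the whole move tree fits in a few lines, but the exhaustive branch-and-score search is the same shape a classic chess engine uses:

```python
def minimax(stones, maximizing):
    # Terminal node: the previous player just took the last stone and
    # won, so the side to move here has lost.
    if stones == 0:
        return -1 if maximizing else 1
    # Branch on every legal move (take 1, 2 or 3 stones), just as a
    # chess engine branches on every legal move, and score recursively.
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    # Pick the move whose subtree scores best for the side to move.
    return max((take for take in (1, 2, 3) if take <= stones),
               key=lambda take: minimax(stones - take, False))
```

With best play, any pile that is a multiple of four is lost for the side to move, and the search discovers this on its own: `best_move(7)` takes 3 stones, leaving the opponent on 4. The scoring rule itself, though, is fixed by the programmer and never improves, which is the static quality the article goes on to discuss.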

Another limitation implicit in these older methods of artificial intelligence is that the methods themselves are static: there is no way for them to refine themselves without the help of human ingenuity. The concept of machine learning is part of the set of revolutionary methods in artificial intelligence that is addressing this limitation and attempting to surpass it.
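A learned method, by contrast, refines its own parameters from examples. As a hypothetical, minimal illustration of that idea (a one-parameter gradient-descent line fit, not any production machine learning system):

```python
def fit_weight(examples, lr=0.1, steps=200):
    # Learn a weight w so that y ~= w * x. Unlike a hand-coded static
    # rule, the parameter refines itself from training examples.
    w = 0.0
    for _ in range(steps):
        for x, y in examples:
            # Gradient step on squared error: nudge w toward less error.
            w += lr * (y - w * x) * x
    return w
```

Fed pairs drawn from y = 2x, the weight converges to roughly 2.0 with no human adjusting the rule; the training-material requirement mentioned above shows up here as the `examples` list the method needs before it can be accurate.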

So, where does virtual reality come into all of this? Well, to start off, virtual reality is similar to artificial intelligence in the sense that the field is still in its development stages. However, virtual reality is in an even earlier stage of nascency.

With the introduction of the popular Oculus Rift to the market, the general population has gotten its first preliminary taste and involvement in virtual reality. Yet, it is apparent the methods for providing a truly fulfilling virtual reality experience are still very rough around the edges, with the introduction of hamstrung attempts like Samsung's Gear VR, which is really just you attaching your phone's display to your face.

Further along the path of VR development lies the innovative company Guru, which aims to advance the integration of VR in exhibits and museums. A key belief of Guru is that the right technology can enhance static works of art, and Guru's augmented reality platform seeks to bring static art, like paintings of historical figures and locations, to life. You will feel as if you have been literally transported into a painting as Guru's digitization software intelligently animates the canvas.

What makes Guru possible derives from its blending of the concepts of artificial intelligence and virtual reality. Artificial intelligence is used by Guru to identify major themes in a painting and distinguish between buildings, people, and objects in order to bring them to life. Meanwhile, the design of the platform exists as virtual reality, allowing visitors to easily and intuitively access it.

Recently, the allure and wonder of the culture and history associated with famous artistic works has been lost to the massive leaps in technology. The expectations of the general population have gradually increased with the subtle introduction of these technologies, which have become commonplace in the lives of many people. Mythology flourished during the time of the ancient Greeks because of the uncertainty associated with the unexplored areas of nature; there could always be a stray nymph running around in a vast forest. But with the certainty provided by technological advancement, that feeling of wonder at the unknown has become rarer over time. Guru allows museums to take that same leap forward in order to connect with their visitors in a manner befitting these technological advents. It amazes and stuns visitors to see this blend of technology and human ingenuity in the palms of their hands. Guru restores to art what technology has replaced: our imagination.

Moving away from immersive virtual reality experiences, there are arguably virtual realities that involve the inverse: the projection of the virtual into the real. Gatebox and its virtual assistant, which can engage you in conversation and control the settings of your home to an extent based on your preferences, is a good demonstration of pioneering in this specific field. One day, the machine learning methods of artificial intelligence may even be incorporated into the conversational abilities of these assistants to give them an increasingly human-like presence.

Artificial intelligence has experienced a paradigm shift in recent times. This is because the older models of decision making that involve brute-force methods or decision trees are transitioning over to models that use neural networks instead. Artificial intelligence methods that incorporate neural networks lead to more precise decision making because they have a number of variable sensors that all go into making a decision, much like how a certain proportion of neurons fire in the human brain in response to a situation. This has allowed some programs to perform more complex tasks, like the precise identification of human faces.
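The "variable sensors" feeding a decision can be pictured as the weighted inputs of a single artificial neuron. A toy sketch (illustrative only; real face-recognition networks stack many layers of millions of such units):

```python
import math

def neuron(inputs, weights, bias):
    # Sum the weighted "sensor" signals, then squash the total through
    # a sigmoid so the output is a firing strength between 0 and 1,
    # loosely analogous to how strongly a biological neuron responds.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))
```

Strong positive evidence on the inputs pushes the output toward 1 and strong negative evidence toward 0; composing layers of these units, with the weights learned from data, is what gives modern networks their precision.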

Another relevant aspect of this shift is how much artificial intelligence derives from the application of machine learning. Before, games such as chess, with relatively fewer calculations required, were conquered after some decades by chess engines. However, games involving more practice and intuition, such as Go, long eluded mastery by machines, until Google's DeepMind AI AlphaGo was introduced.

In a surprising turnaround, Google's AI was able to beat one of the leading Go champions, Lee Sedol, 4-1 in an exhibition of five games, showing the proficiency and capabilities of these new machine learning methods. Interestingly, Google has also employed these machine learning methods in other applications, such as in the regulation of its cooling systems to be more efficient.

This post is part of our contributor series. The views expressed are the author's own and not necessarily shared by TNW.


Will Virtual Reality Solve Your Conference Call Nightmares? – Fast Company

On Fridays, Nick Loizides shows up for a meeting. He and 30 or so people gather to report bugs on the software they're beta-testing, get developer updates, and check each other's work. Most of them have never met in person and are located around the world. But in these meetings, they talk face-to-face, make eye contact, and watch each other's lips move in real time.

As a 3D artist, Loizides is one of the early-invite users for Sansar, a virtual reality world by Linden Lab, the makers of the massively multiplayer social game Second Life. They hold these meetings in virtual reality, where they can travel to the worlds of the testers' creations: beaches, outer space, elaborate rooms. It's as close to teleportation as one can get.

Sixty-three million VR headsets shipped in 2016 (compared to 1.5 billion smartphones), with a lot of that interest around porn and gaming. Companies investing in the technology, like Linden Lab, not surprisingly, swear it's coming to your work meetings sooner rather than later.

Anyone who's ever been in a painfully slow or disjointed Skype call, yelling into the ether, "Unmute your mic!" knows that the technology, and the user experience, is sorely in need of an update. But will VR solve those frustrations, or just move them to a new, pricier, face-sweatier format?

"It gets as close as we can right now to really replicating a face-to-face type of meeting," says Eric Boyd, a professor of marketing at James Madison University. Boyd is guest-editing an upcoming issue of the Journal of Business Research that will focus on virtual reality. "You and I, we're having this telephone conversation, but the only information we're really getting is what each of us is saying. We're missing the body language."

Video calls add a layer of intimacy with facial expressions, but reading someone's mood from the neck up on a computer screen isn't always enough. Are they sitting with arms and legs crossed, or are they leaning in, open and receptive? "It takes less mental effort when you don't have to interpret and infer information," Boyd said.

Voice and eye-tracking technology give the sense of eye contact and facial expressions.

In addition to adding interactivity and information (VR could especially benefit architects walking through virtual floor-plan renderings with clients), it adds an interpersonal connection that video or phone can't: the freedom of living behind an avatar.

"In the virtual world you learn about someone from the inside out because you don't see the person, you see their avatar, whether it's a likeness of that person or whatever they want it to be," Loizides said. "But they're much more open to being open. You're so open because you're protected and safe behind the computer. You're not actually with that person with your guard up. You can really be free to express anything."

Believers see VR as inevitably becoming as world-altering as the smartphone. The first words to me from many of the corporations and VR companies I asked about the long-coming VR revolution were, "It's happening." That's what Bjorn Laurin, VP of product at Linden Lab, told me: he predicts virtual-meeting ubiquity for the general public, for it to become as commonplace as owning an iPhone, within five to 10 years.

"We are still not at the point where people want to hang out in headsets for a long period of time," says Derek Belch, founder and CEO at STRIVR. STRIVR is in the VR game, but not for meetings. They're developing training content, for which there's proven benefit over just watching or reading onboarding material. A 30-minute meeting in VR? Not happening anytime soon, Belch said, citing the hardware and comfort of headsets as reasons. Headsets currently weigh about a pound, which sounds light until you have one strapped to your face for an hour.

"If the comfort level of the headsets improves to the point where people want to wear them for an entire meeting, then I don't think any of the other factors will be issues."

Boyd also points to the many unknowns in long-duration VR immersion and comfort: many people experience dizziness or motion sickness even in a tame virtual setting, and it's still not clear what the effects of putting a screen an inch from your eyeballs for an hour at a time will do to you. Ophthalmologists say it poses no threat to your eyes, but it can still cause eye fatigue and strain, in the same way staring at any screen might.

The other factor that will determine how widespread the adoption of VR meetings will be is where the trends in remote work go. Some companies are moving away from remote work altogether, in an effort to keep the company culture alive. IBM, one of the pioneers of remote work, recently gave its scattered workforce an ultimatum: come back to the office or quit. "If people decide they still want employees in the office, it's going to work against VR to some extent, I think," said Boyd. "Is this five years or 50 years down the road? A lot of it has to do with business practices and what businesses feel comfortable doing, and not necessarily what technology can do for them."

Five years is optimistic, Boyd said. "I think we're probably looking more toward eight to 10 years before we really start to see a supply of technology that can support it and people are seeing the benefits and how it can be easily incorporated in their day-to-day life."

Freelance tech, science and culture writer. Find Sam on the Internet: @samleecole.
