Get Ready For Brand New Google Glass

Get ready for the rebirth of Google Glass. Details recently uncovered indicate that Google is working on an upgraded version – one aimed not at general consumers, but at the enterprise market.

According to 9to5Google, this upcoming edition will pack a larger display prism, an Intel Atom CPU, and the ability to connect an external battery pack — all upgrades critical for enterprise users.

This model, if real, would be sold to companies looking to equip their workforces with smarter technology. Instead of receiving tweets, these users would get information related to their jobs. Think warehouse workers: with this version of Google Glass, they would be able to receive and process orders more quickly.

Reportedly, the new version of Google Glass also sports improved battery life, thanks in part to the Intel Atom CPU. The specific clock speed is unknown, but this tiny SoC has already proven itself in some Android Wear devices. The inclusion of an optional battery pack suggests Google is aiming this device at industries that need constant, all-day use.

This isn’t the first time an enterprise-focused Google Glass has been reported. The WSJ reported in late 2014 that Google was retooling Glass for such a use case and would release the model in 2015.


Google’s surprising VR push is the future of virtual reality

Google is one of the most important tech companies in the world — the most important, depending on whom you ask.
Between YouTube, Google Search and Gmail, the Mountain View, Calif.-based company created, owns and operates much of what we have come to expect from the Internet. The company’s influence is so great that its name has become a verb — “Could you google a good restaurant for tonight, honey?” And that’s why it’s such a big deal that Google is pushing into virtual reality with one of its most important services: YouTube.

The company told attendees at its annual I/O developer conference on Thursday that the public will be able to upload 360-degree videos to YouTube starting this summer. In addition, a VR-ready version of the YouTube app is coming in the not-too-distant future.
To be clear: the world’s most popular video platform is moving into virtual reality.
An embedded example shows how it will work, using arrows in the upper-left corner to navigate the video in three dimensions.

This is a far more important move than any of the video game and film applications we’ve seen so far for VR, and the reason should be clear: cat videos.
That’s no joke. People are far more interested in watching cat videos — and everything else on YouTube — than they are in even the most popular video games.
For comparison, the extremely popular (and free) game “League of Legends” has some 27 million players. YouTube has “over 1 billion” users.
Joking aside, imagine: you buy an inexpensive 360-degree video camera, capture warm family memories with it, and share those moments on your family’s private YouTube account. Watching the videos is like living it all over again. And not just in a nostalgic way; with 360-degree video and a VR headset, you’re there.
With Google’s “Jump” system, that’s all possible. “Jump” is the initiative Google unveiled today that offers a turnkey solution for turning 360-degree footage into YouTube material. Think of it this way: it’s an easy way for complex video to be shot, assembled and distributed for playback on VR headsets. It’s a standard for filming, distributing and displaying 360-degree video.
Sounds boring, right? It is! It’s the boring detail work behind a massively important initiative from one of the — if not “the” — most important tech companies in the world. There’s no VR headset you need to buy right now. You could build a version of Google’s Cardboard headset, or buy one of the many versions of it on Amazon from other manufacturers. But that’s not the point.
In the next 12 months, a variety of VR headsets will launch from different companies. For the mainstream, the “killer application” won’t be a dogfighting space shooter, and it won’t be a puzzle game. It will be applications like YouTube and Netflix. Google has been working toward this future, and that’s huge.


Sensors, Machine Learning and VR: The Future of Smartphones

Imagine walking out of an Italian restaurant, and your phone knows where you are. It knows you love gnocchi and that you even traveled to Milan recently. It not only gives you a voucher, but an immersive experience in which you can explore the restaurant virtually to see what people are eating, and visit the kitchen to see how the food is prepared. Tempting?
In the last decade, smartphones have evolved from basic phones into portable entertainment centers. We use them to text, watch movies and keep ourselves occupied. Now smartphones are about to evolve again. Sensor data combined with machine learning and virtual reality will usher in a new wave of engagement, convenience and utility. Interestingly enough, much of this technology sits in our phones right now.

Your smartphone is smarter than you think

Most people don’t realize how smart their phones are, or how much they already know about us. Unlike laptops, modern smartphones are packed with dozens of tiny sensors that let them collect all kinds of data about us, what we do, and the world around us.
Accelerometers and gyroscopes are the sensors we hear about most. They can gather data about us even when we aren’t actively using the phone. But most smartphones also have an image sensor, a touch sensor, a proximity sensor — up to 30 different sensors in all, including GPS for location.
New sensors are being developed all the time, and each one opens the door to new possibilities. Chemists at MIT recently developed a smartphone sensor that recognizes when food has gone bad. Imagine using your phone to check whether the roast chicken you brought home three days ago is still safe to eat.
Sensors make our phones more aware, but sensors only collect raw data. Putting that data to use requires machine learning. By searching for patterns in the data, intelligent apps can figure out whether you are walking or driving, whether you are tall or short, and even guess your gender. It may sound spooky at first, but less so when you consider how useful such apps can be.
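As a toy illustration of this pattern-finding idea — not code from any real app — here is a minimal sketch that guesses whether a phone’s owner is walking or standing still from the variance of raw accelerometer readings. The threshold and the sample data are invented for illustration; a real app would learn them from labeled training data.

```python
import math
import statistics

def classify_activity(samples, walk_threshold=1.0):
    """Classify 'still' vs 'walking' from raw accelerometer samples.

    samples: list of (x, y, z) readings in m/s^2.
    walk_threshold: hypothetical variance cutoff, chosen by hand here.
    """
    # Magnitude removes dependence on how the phone is oriented.
    magnitudes = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    # Walking produces rhythmic spikes, i.e. high variance over a window.
    variance = statistics.pvariance(magnitudes)
    return "walking" if variance > walk_threshold else "still"

# A phone lying flat reads roughly (0, 0, 9.81) with tiny noise.
still = [(0.0, 0.0, 9.81 + 0.01 * (i % 3)) for i in range(50)]
# Walking adds a rhythmic bounce on the vertical axis.
walking = [(0.0, 0.0, 9.81 + 3.0 * math.sin(i / 2.0)) for i in range(50)]

print(classify_activity(still))    # -> still
print(classify_activity(walking))  # -> walking
```

Real activity-recognition pipelines use many more features (step frequency, gyroscope data) and a trained classifier, but the principle — turn raw sensor streams into patterns, then patterns into labels — is the same.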

Apps that think on their own

The smartest apps will use sensor-based data to deliver context-sensitive information. We have already seen examples of this in first-generation fitness applications that track how fast and how far you walk or run. And many applications, such as OpenTable, Uber and Yelp, use GPS as a core component to serve information based on our location.
You may already be familiar with Apple’s iBeacon, a small radio transmitter widely used by retailers, airports and even the MLB and NFL to deliver fine-tuned content to your smartphone based on your location.

Some applications today also crowdsource sensor data for traffic and weather forecasts. Think about how Google collects smartphone GPS data and sends it back to users as accurate route-time estimates. Another company, PressureNet, is working to pull barometer readings from smartphones to improve weather and climate predictions.
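To make the crowdsourcing idea concrete, here is a hypothetical sketch of how a routing service could turn anonymous speed reports from phones on a road segment into a travel-time estimate. The numbers and fallback logic are invented for illustration; this is not Google’s actual model.

```python
def eta_minutes(segment_length_km, speed_reports_kmh, free_flow_kmh=60.0):
    """Estimate travel time for a road segment from crowdsourced speeds.

    speed_reports_kmh: anonymous speed samples from phones currently on
    the segment. Falls back to an assumed free-flow speed when nobody
    is reporting.
    """
    if speed_reports_kmh:
        # A simple mean; production systems would weight by recency
        # and filter outliers (parked cars, pedestrians).
        speed = sum(speed_reports_kmh) / len(speed_reports_kmh)
    else:
        speed = free_flow_kmh
    return 60.0 * segment_length_km / speed

# Five phones crawling at ~20 km/h suggest congestion on a 5 km segment.
print(round(eta_minutes(5.0, [18, 20, 22, 19, 21]), 1))  # -> 15.0
print(round(eta_minutes(5.0, []), 1))                    # -> 5.0
```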
But tomorrow’s mobile applications will use sensor information on a much larger scale. These apps will pick up on patterns and routines, and learn a user’s preferences over time. “Anyone can collect data. Finding an automated way to create meaning from that data is of the utmost importance,” says Nils Forsblom, founder of Adtile, a company working on new ways to use machine learning and virtual reality for marketing.

Future applications will usher in a new level of convenience. Instead of asking for input, they will anticipate your needs. A phone might send calls to voicemail when you are driving, or switch to flight mode when it senses an aircraft moving on the tarmac. An application might hear people talking in a conference room and ask: “Would you like to record this meeting?”
Virtual reality adds a new kind of creative engagement
But what happens when you mix sensor data and machine learning with virtual reality? Mobile devices may someday deliver experiences that bring inanimate objects to life and let you do things like walk around a sculpture or explore the newest exhibit in a museum.

“The phones of the future might look like Oculus VR meets iPhone — without the headset,” says Forsblom. Oculus makes a headset that delivers virtual reality; Forsblom predicts smartphones will deliver such experiences without one.
Advertising may no longer interrupt whatever you’re doing or reading, but instead take the form of active engagement. You could use the phone as an extension of yourself as you walk through a car dealership. If you see something you want, you can use gestures and movements to explore a car in detail, get more information or sign up for a test drive.

“In the future, smartphone hardware and software will work seamlessly in harmony. Future mobile devices will be a mixture of invisible apps for utility, entertainment, virtual reality and gaming. Mobile virtual reality is the ultimate input-output device and creative medium,” says Forsblom.
The next few years will likely see dozens of new applications that use sensors in all kinds of crazy ways. Our smartphones will become more like personal assistants that understand our preferences, habits, likes and dislikes. And virtual reality has the potential to take that one step further, letting us explore places and objects without leaving the sofa — now that’s convenience.


First Eye Tracking Virtual Reality Headset

Fove VR

It seems that every tech company needs to be working on a VR or AR headset to count as innovative these days. But FOVE, a new VR project that landed on Kickstarter today, has a genuinely new angle: eye tracking. Sensors on the headset track the movements of your eyes rather than your head, reducing nausea and, in theory, increasing immersion.

Yuka Kojima, a former game producer at Sony Computer Entertainment Japan, founded the startup with CTO Lochlainn Wilson, who led interactive storytelling projects for PlayStation 3, PS Vita and Move — so there is a clear line to what Sony is working on with Project Morpheus. In fact, when we met Wilson in a London hotel for some eyes-on time with the new tech, he revealed that the relationship didn’t stop there.

“Sony is great at this stuff,” says Wilson, noting that the company has been researching face recognition since 2012. “They have a Magic Lab who’ve done all this stuff. It’s just that, politically, across the board, it gets killed. They’re like: ‘That’s not proven.’ But being so anti-innovative in some layers gives us a chance to really innovate — we get to play with the technology and learn our own way, while Sony has had it and they’re like, ‘Meh.’”

This is at once quite enlightening and not so surprising. While we call eye tracking “new,” it has actually been used before: at the Sony booth at the Game Developers Conference last year, no less, where Project Morpheus first premiered. Instead of sitting in the headset, however, the eye tracking was tested alongside it to control Infamous: Second Son. Wilson smiles.

“I’m not sure I should say this, but we gave demos to Sony people all the way up the chain,” he continues, “and there was huge, huge excitement, but at the highest level they said, ‘We could do this with head tracking.’ And I’m like, ‘That’s not the point. Of course you can, but you’re going to make people sick, you’re going to strain their necks, and we add more interesting things.’”

“You can do things with eye tracking that can’t be done with head tracking. Right now, as I talk with you, I’m actually looking over there — you just can’t do that in VR. If this were an Oculus or Morpheus experience, the character would think they were being ignored.”

So how does it work?


FOVE is an elongated headset that combines a 2560×1440 display with VR motion tracking and, most importantly, eye tracking. Cameras mounted beneath each eye track where you’re looking, via a calibration sequence of green dots that prompts you to stare at certain parts of the screen.
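The calibration step can be illustrated with a toy model: if pupil position maps roughly linearly to where you are looking, a least-squares fit over the dot sequence recovers that mapping. The sketch below — with made-up pupil coordinates, and only the horizontal axis for brevity — is an assumption about how such calibration could work, not FOVE’s actual algorithm.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Hypothetical calibration data: pupil-center pixel positions recorded
# while the user stared at dots of known horizontal screen coordinates.
pupil_x  = [310, 330, 350, 370, 390]
screen_x = [0, 640, 1280, 1920, 2560]   # dots spread across a 2560-wide display

a, b = fit_line(pupil_x, screen_x)
gaze = a * 350 + b   # map a live pupil reading to a screen coordinate
print(round(gaze))   # -> 1280
```

A real tracker fits both axes, handles head slippage, and re-calibrates over time, but the principle of anchoring eye images to known on-screen targets is the same.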

The screen has 1.7x the pixel density of the Oculus Rift DK2 and even beats the HTC Vive (2.5K to HTC’s 2K), though at 595g the headset it comes in is a bit of a beast.

“We get a very good feed of the eye, so we can track pupil dilation and constriction,” says Wilson. “Because we know where you’re looking and how bright the pixels are, we can formulate relationships — we could tell whether you’re responding emotionally or physically to a scene. It’s a bit of an inexact science at the moment, but it has potential. And with the sensitive accelerometers involved, we can technically get your heart rate too.”
“When you’re directly in front of a character in virtual reality, you’re very coordinated and very aware of the slightest error in their interactions. And the fact that they don’t know where you’re looking is a solid block to actually treating them as a conscious being. In one of our demos, you can make eye contact with a virtual character, and the first time I did, there was just a feeling that she had come alive.”

While this all sounds great, the only full demo we were able to explore had us shooting down spaceships with our freakishly accurate pupils (“It’s like a superpower,” laughs Wilson. “If we played Counter-Strike with this, we’d be banned very quickly”). It works well — tracking at 60fps with very little latency — though bizarrely, if you’re used to VR, it actually takes a moment to acclimatise, given how used you are to looking with your head. It’s definitely a compelling USP in the PR game, even if we wished the demo weren’t so simple.

A second, semi-finished demo is more of a Shadowgun-style sci-fi first-person shooter, and it shows off another cool trick of eye tracking: being kind to your processors. Because FOVE can tell where you’re looking, it can render scenes in high fidelity for your eyes only, leaving other areas at lower fidelity and reducing the demands on the hardware (while actually adding immersion, via a slightly blurred, lower-fidelity real-time depth-of-field effect). It’s currently in a viable, though not particularly polished, state, but it has the potential to pull much better-quality VR out of flagship smartphones.
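This “render sharply only where the eyes are pointed” idea is generally known as foveated rendering, and a toy version is easy to sketch. The radii and scale factors below are invented for illustration; FOVE’s real pipeline is not public.

```python
import math

def render_scale(tile_center, gaze, full_radius=200.0, falloff=600.0):
    """Return a render-resolution scale (1.0 = full, floor 0.25) for a
    screen tile, given the current gaze point in pixels.

    Tiles within full_radius of the gaze get full resolution; beyond
    that, resolution drops linearly toward the periphery. The numbers
    are made up for this sketch.
    """
    dx = tile_center[0] - gaze[0]
    dy = tile_center[1] - gaze[1]
    dist = math.hypot(dx, dy)
    if dist <= full_radius:
        return 1.0
    scale = 1.0 - (dist - full_radius) / falloff
    return max(0.25, scale)

gaze = (1280, 720)
print(render_scale((1300, 700), gaze))   # tile under the fovea -> 1.0
print(render_scale((100, 100), gaze))    # far periphery -> 0.25
```

The GPU saving comes from rendering peripheral tiles at the reduced scale and upsampling them, which the eye barely notices because peripheral vision is low-acuity anyway.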

“We’re talking a PlayStation 3 to PlayStation 4 jump in performance,” says Wilson. “A six-fold increase in processing power to do advanced rendering. Mobile VR, AR and hybrids of the two are the real revolution, and I hope our technology will be at the core of that. If you put on a Samsung Gear VR, the only interaction you have is tapping on the side of the headset and head movement — exhausting. But you put this on, and you can interact with your eyes — that is very powerful.”

OK, how can I get one?

Well, for $350 (£225) you can nab an early development version, delivered in May next year. A team of 60 is working on FOVE out of a hacker space in Japan, running like hell with a 3D printer in true Valve style. Our only word of warning: full consumer versions of both the Oculus Rift and Sony’s Project Morpheus are set to land around the same time, while HTC and Valve still expect the Vive to ship this year. Could it arrive too late for a party already overpopulated with rather large wallets?
FOVE, to its credit, is trying to play nice with everyone. The SDK will support Unity, Unreal Engine and CryEngine, plus open VR platforms from OSVR to SteamVR (Oculus’s closed system is officially off limits, though). The team also aims to slim that 595g headset down, shrinking the design considerably and adding a thermal vent so it never fogs up. They have also done a lot of work with ALS patients in Japan, giving them a way to interact using only their eyes.

But realistically, as the meetings with PlayStation testify, FOVE feels like a technology play — a company waiting for its tech to be bought, a team with a great idea that they want others to notice. Wilson is certainly philosophical about the future.

“Going consumer would have to go quite successfully for us, because it would require us to raise some significant capital, up against Facebook’s. It’s tough, tough hardware and a lot of work to get things out the door. If we can’t build a content ecosystem, or don’t get people super-excited about eye tracking, we would try to license the technology.”

Just a few hours after the Kickstarter went live, FOVE had raised $105,000 of its $250,000 goal. Maybe eye tracking will capture people’s hearts as well as their heads.


How Realistic is the Hacking in TV Shows?

A group of five impeccably dressed high school girls has been almost murdered dozens of times by the same mysterious stalker, and the police in their idyllic small town are either too corrupt or too incompetent to care. How do the girls fight back? Hacking, of course. At least, that’s sort of how they do it on Pretty Little Liars. “Hacking” is the deus ex machina in many scenarios on Pretty Little Liars and other mainstream shows, letting characters effortlessly follow, harass, stalk and defend one another for 30 to 60 minutes at a time.
But how real is it? To determine the feasibility of the hacks on shows like Pretty Little Liars, Sherlock, Scandal, Arrow, CSI: Cyber and Agents of SHIELD, I spoke with Patrick Nielsen, a senior security researcher at Kaspersky Lab.
“One of the interesting things about security is that a lot of what you see on TV is actually not that far from the truth. Real hacking is not nearly as colorful, but the result is usually closer to realistic possibility than absurd fiction,” says Nielsen.
Nielsen suggests that many seemingly absurd depictions of technology on TV are not wrong, per se; they are often just ahead of their time.
“We use computers in more things every day, from critical infrastructure to fitness bands, and they all run software — vulnerable software — and we’re not putting nearly as much effort into securing it,” he says. “So we can laugh about how ridiculous the hacking UIs or the ‘two hands on one keyboard’ scenes from NCIS are, but the threats are real.”
On that ominous note, let’s measure TV against the problems of real life.


Pretty Little Liars, ABC Family

In “Welcome to the Dollhouse” (Season 5, Episode 26), four young women convicted of a serious offense ride in the back of a police van. They are talking and hugging — no one is supervising them, of course — when suddenly, bam. The van swerves violently and crashes to a halt. As it turns out, the creep who has been stalking and torturing these girls for years was able to hack into the van’s onboard computer and take remote control of the vehicle.
“The biggest problem is that most cars have (reachable) computers in them, but even the smartest of smart cars don’t, so far, have computers that completely control the car — yet,” says Nielsen. “As a rule, an attacker could send information that would cause the car to brake, but they would not be able to steer. So for now, full remote control of a vehicle is unrealistic.”
Later in this episode, a group of young men — friends of the girls — discuss what happened and what they plan to do about it. One of the boys is an 18-year-old technology prodigy, quick to hack but deeply worried about the girls, as are his friends: a rookie cop and a high school English teacher. They call the stalker “A.” Here is the transcript of the scene:
Caleb comes into the PD command center.

And if you cross-check the van’s GPS system with the PD’s system, it goes dead here —

On Route 30 near the railroad crossing.

That’s where A hacked into the van’s computer system and took remote control of the vehicle.

OK, so A would have needed to be in the area to keep the van on the road.

A would have needed a clear view of the van — and would have been seen by the deputies, too.

Are there traffic cameras in the area you can hack?

I’m one step ahead of you. I’m pulling up that footage now.
Nielsen says that a big problem with this scenario is that the guys are flying blind. They don’t know the hardware’s specific location, which is often the most difficult part of a successful hack.
“Even if penetrating the type of computer that a certain police station or a particular CCTV camera uses is simple, you still need to find the right target,” he says. “That can be much more difficult than the hack itself. And identifying a specific camera based on its position in real time is also quite unrealistic, at least over the Internet. If they were in physical proximity to the camera, it would be easier — but then they wouldn’t need the camera.”
KASPERSKY CONCLUSION: Mostly unrealistic
My verdict: Never trust a hacker who says something as unnecessary as “GPS system.”


Sherlock, BBC

A similar technological feat occurs in two separate episodes: “The Reichenbach Fall” (Season 2, Episode 3) and “His Last Vow” (Season 3, Episode 3).
In “The Reichenbach Fall,” it’s present-day London, and Sherlock Holmes is in a cab after a hard day of hunting for clues. An ad plays on the TV in front of his seat, and he asks the driver to turn it off. Instead, the ad cuts out and is replaced by a video of Holmes’ archenemy, Jim Moriarty. The video is meant only for Holmes, and it plays only in his taxi. The twist: after Holmes leaves the car in a daze, he sees that the driver was Moriarty.
“The interesting question here is not whether compromising a taxi’s display is possible — it is — but how Moriarty knew not only which cab Sherlock was in, but how to find and compromise that particular taxi on whatever network it was on,” says Nielsen.
Jump to “His Last Vow,” the Season 3 finale, and someone — possibly Moriarty — has cut into every TV channel in the UK at the same time. The nation watches in shock as a mocking, shocking video plays on loop without interruption. Two horrified government officials speak:
How is this possible?
We don’t know. It’s on every screen in the country, every screen simultaneously.
“As for compromising TV channels, sure, it’s possible, but nothing really stops people from alerting the TV stations, which could very quickly turn off the compromised feeds — so in a real scenario, the attacker wouldn’t have long,” says Nielsen.
KASPERSKY CONCLUSION: Mostly unrealistic
My verdict: Cab displays can definitely be compromised; good to know.


Scandal, ABC

Scandal centers on Washington, D.C.’s top political communications expert, Olivia Pope, and in the first episode of Season 4, “Randy, Red, Superfreak and Julia,” we find her lounging in luxury on an island so remote it doesn’t appear on any map. A boat with supplies arrives, including five bottles of a rare and coveted wine. Along with the wine, Pope gets a letter asking her to return home. Later, it’s revealed that a colleague — an amateur but talented hacker — found Pope by tracking shipments of the wine, something she can’t live without, all over the world.
“International shipments must specify their contents and value for customs purposes, and that information is stored in databases. It’s not unrealistic at all for someone with a laptop to get onto a shipping company’s internal network, look for all ‘wine’ shipments in which the value is very high, and search for the destination,” says Nielsen. “The hardest part would be getting the note into the package, but that’s also possible with some social engineering. My question is: how would a shipping company find an island that isn’t on a map?”
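As a toy illustration of Nielsen’s point, filtering shipment records for high-value wine consignments is trivial once you can read the database. The records, field names and threshold here are all invented:

```python
# Toy shipment records of the kind a carrier's internal database might hold.
shipments = [
    {"contents": "wine",     "declared_value": 12000, "destination": "Zanzibar"},
    {"contents": "wine",     "declared_value": 80,    "destination": "Boston"},
    {"contents": "textiles", "declared_value": 15000, "destination": "Lagos"},
    {"contents": "wine",     "declared_value": 9500,  "destination": "Zanzibar"},
]

def rare_wine_trail(records, min_value=5000):
    """Return destinations of unusually valuable wine shipments."""
    return [r["destination"] for r in records
            if r["contents"] == "wine" and r["declared_value"] >= min_value]

print(rare_wine_trail(shipments))  # -> ['Zanzibar', 'Zanzibar']
```

The hard parts, as Nielsen notes, are getting onto the network in the first place and the physical tampering — not the query.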
My verdict: These are bad times — times when I can’t even keep my secrets to myself.


Agents of SHIELD, ABC

The Avengers offshoot has a ton of futuristic and alien technology, so to help Nielsen warm up, we started with a line from Season 1, Episode 4, “Eye Spy.”
In this episode, the well-trained, super-intelligent agents of SHIELD are scanning hundreds of photos of the same person, using photo-recognition software that pulls from multiple online sources.
It’s amazing. Every year, this part of our job gets easier. Between Facebook, Instagram and Flickr, people are surveilling themselves.
Nielsen calls this line “moving.”
“We’ve all seen the technology that tells us a friend has been recognized in one of our photos [i.e., tagging] and asks, ‘Do we want it on our timeline?’ There is no technical reason the same technology can’t be used to find a certain person in all the photographs a company holds — or why an attacker who has compromised that company couldn’t do the same.”
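A hedged sketch of the underlying idea: modern face recognition reduces each face to an embedding vector, and “finding a certain person in all the photographs” then becomes a similarity search. The tiny four-dimensional vectors below are made up for illustration; real systems use embeddings of 128 or more dimensions produced by a neural network.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def find_person(target, photo_embeddings, threshold=0.9):
    """Return IDs of photos whose face embedding matches the target."""
    return [photo_id for photo_id, emb in photo_embeddings.items()
            if cosine_similarity(target, emb) >= threshold]

# Made-up embeddings: same person's faces land near each other in the space.
target = [0.9, 0.1, 0.3, 0.2]
photos = {
    "img_001": [0.88, 0.12, 0.31, 0.19],  # same person, slightly different shot
    "img_002": [0.05, 0.9, 0.1, 0.4],     # someone else
}
print(find_person(target, photos))  # -> ['img_001']
```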
The majority of this particular Agents of SHIELD episode focuses on a woman with a high-tech camera implanted in her eye. Skye, SHIELD’s go-to hacker, looks for the camera’s transmission source — not yet knowing it’s an in-eye system — and successfully reverse-engineers it.
I think I can recover the backup data from the encrypted broadcast. I don’t understand it yet, but that’s how they’re watching us. Give me an hour. Maybe we can watch them right back.
“I mean, they use real words, but it’s not clear what the real value would be,” says Nielsen. “It may be that they found a reference to the origin of the transmission, and that was enough to determine the attacker’s network/IP address, which they then compromised.”
Nielsen sees no problem with Skye remotely defeating the eye camera’s security; that part is quite plausible. It’s the turnaround time on the actual hack that he finds problematic.
“What makes this unrealistic (based on the description) is that the SHIELD agents discover a completely new technology and then figure out how to compromise it in a few seconds or minutes. In reality, that is a very long and tedious process. Real attacks are typically incredibly fast — no fancy animations or windows popping up on the screen — but they rely on scripts and programs based on vulnerabilities that took months or years to find and analyze.”
This hack reminds Nielsen of a recent paper on side-channel attacks on encryption.
“A few years ago, Adi Shamir, a famous cryptographer, and his team published a paper showing how you could extract [an] encryption key from a computer simply by listening to it. In February this year, they showed how you could do it with a radio by detecting the computer’s electromagnetic emanations. Computers leak information everywhere — noise, electromagnetic waves, heat — and it all means something a clever attacker can use to extrapolate all kinds of information.”
KASPERSKY CONCLUSION: Mostly unrealistic
My verdict: In-eye cameras aren’t really that far away — at this rate, they’re probably closer than Facebook’s Oculus Rift.


Arrow, The CW

In “Home Invasion” (Season 1, Episode 20), we’re given a warehouse-turned-bunker lined with exercise equipment and high-tech gadgets. Felicity Smoak, hacker extraordinaire for secret vigilante Oliver Queen, is on an online fact-finding mission. She hacks into ARGUS, a government organization, and ends up lurking in its systems for days, even weeks, at a time.
I thought it would be helpful to track ARGUS’s hunt for Deadshot, so I decrypted their communication protocols again. Which means I’ve just hacked a federal agency. Which kinda makes me a cyber-criminal, which is bad, because I can’t imagine I’d fit in well at Guantanamo Bay.
Nielsen says this one largely checks out. “Compromising a company and stealing information from its databases — whether reports, customer data, or anything else — happens all the time, and we often see attacks go on for months or years before they are discovered.”
Later, Smoak shows off her computer skills again with the following description:
I had a remote-access Trojan scour the Internet for Edward Rasmus. His name just popped up on a passenger list — an 8:15 flight to Shanghai.
This is a little more complex, says Nielsen.
“A Trojan that ‘scours the Internet’ for someone or something is what we have seen in some advanced malware, like Stuxnet, which did very little except spread until it gained access to a particular type of control system used by Iran’s nuclear program. The difference between real nation-state attacks and TV is that nation-states have to spend a lot of time identifying the weak points and targets they want to hit. There is no such thing as a Trojan that simply infiltrates everything, including flight-booking systems, unless it was designed to do exactly that.”
My verdict: The most unrealistic aspect of Arrow is all those ridiculously sculpted abs.


CSI: Cyber, CBS

Ah, the mother lode. In CSI: Cyber, Special Agent Avery Ryan and her team hunt down cyber-criminals, but in “Fire Code” (Season 1, Episode 4), the damage is all too real. Someone has figured out how to start house fires from a distance. Ryan and Agent Elijah Mundo (sorry, Dawson!) chase down one of their criminal informants, a hacker who hands them a USB stick with “a hot new piece of code.” Back at the lab, agent and white-hat hacker Daniel Krumitz eyes the USB drive before going to work.
The connection is secure. Just plug in the flash drive.
He plugs it in.
“Plugging in a flash drive is actually very risky, no matter whether you’re online or not,” says Nielsen. “On a lot of computers, it can give an attacker full access to the entire system — not just by injecting software into your operating system, but by reading memory at the hardware level, beneath the operating system. You don’t want to connect a USB stick to your computer just because someone says, ‘It’s OK.’”
The scene continues: once the data from the flash drive is loaded, Krumitz presses a button and a printer tries to print something — and immediately catches fire.
Your CI gave us code hidden in a firmware update that triggers a printer’s thermal switch. The switch regulates the temperature of the ink fuser to keep it from overheating. So when the paper loads and touches the blazing-hot fuser, it sparks and ignites the paper in the tray, setting it on fire.

So the fuser is the flame, and the paper is the fuel.

And code sent from a computer did all this?

It’s pretty amazing, isn’t it?
This description isn’t unrealistic, says Nielsen. If anything, it’s a little too realistic.
“Am I reading the transcript of the CSI episode, or a research paper? I’m lost,” he says. “I would certainly give CSI: Cyber props for supporting the script with real research, rather than just ‘I’ll create a GUI interface using Visual Basic to track the killer’s IP address.’”
My verdict: Plugging in an unvetted, random flash drive is silly, so remember to always use protection (no, we’re not talking about a Trojan).

These scenarios are, of course, a small selection of the wild and wacky ways Hollywood portrays technology on television. But all in all, it seems even the craziest ideas aren’t too far off the mark. There is still one limitation, however, and that is price.
“All the stunts I marked as realistic cost time and knowledge, but money can speed up the process,” he says. “It takes time to learn how to do these things, and to do the research necessary to affect certain systems — especially if you need to compromise a type of device that’s hard to get your hands on, like a particular type of CCTV camera. A single person with a laptop could carry out all these attacks, especially with exploits other people have written, but it would take longer than it would for a large, well-funded group. The attacks I referred to as unrealistic become possible if you assume nation-state-level attackers, for whom money and other resources are not an issue.”


Developer Edition of Sony Smartglasses Released

Sony has officially released its latest augmented reality glasses in 10 countries around the world. The wearable was announced by the company a few months ago, and the developer version began shipping on March 28. The Sony SmartEyeGlass Developer Edition, SED-E1, is Sony’s answer to other smart glasses on the market. The glasses come with a separate controller in addition to the primary AR glasses and sell for $840. Sony’s glasses are quite different from Google Glass. Beyond the US and the UK, they are now available in other countries as well.

The SmartEyeGlass

The SmartEyeGlass has a binocular see-through display for augmented reality. It is powered by holographic optics technology, which gives the glasses a bright display with high transparency. The holographic optics superimpose images, symbols and text over the wearer’s natural line of sight. Voice messages or navigation instructions can also be overlaid by the Sony SmartEyeGlass.
At only 3.0 mm thick, the lenses can be worn without discomfort. With a variety of applications compatible with the AR display, it becomes easier for wearers to get additional information: apps range from managing inventory to getting help with car repairs, and much more.

Availability of equipment

The device is now on sale in ten countries: Japan, Italy, France, Belgium, Germany, Spain, the Netherlands, Great Britain, Sweden and the USA. It carries different price tags in these countries. In Italy, France, Germany, the Netherlands, Belgium and Spain, the device can be bought for €670. In Japan it costs ¥100,000, while in Sweden you have to spend 5,810 SEK. If you live in the UK, you can get the Sony SmartEyeGlass for £529, and you should be willing to spend $840 if you are a resident of the United States.
While in the UK, USA, Germany and Japan the device can be purchased by any consumer, in the other countries only business customers are entitled to buy it.
So if you want to buy Sony’s smart glasses, you can do so, depending on your country of residence and the amount you are willing to spend.


Extreme Virtual Gaming with Virtuix Omni Treadmill

The Virtuix Omni treadmill prototype has been shown at various exhibitions since mid-2013, but in its first few years it had its limits. It is finally ready for release, and the final design was shown at CES this year. The treadmill lets you play virtual reality games while walking on it. You get to enter an immersive world and can meet your favorite superhero in a game. With this virtual reality system, you can run around in the virtual environment. When you run or walk in a virtual world, you get the ultimate immersive experience, one that would not be possible while seated.

A Virtual Reality Gaming Platform to look forward to

The Virtuix Omni is a compact, low-cost VR gaming platform that can be easily assembled. It is one of a kind. The concave platform is designed for smooth, natural movement while the device is in use. Because there are no moving parts, it can provide a consistent experience.
It is built around a metal frame and consists of three main parts. The first part is a special pair of shoes, with plastic tracking units on top and smooth plastic soles on the bottom. The second part is the base you step onto and stand on: a slippery, concave, black dish. The third part is a harness that straps around your waist and legs and is held firmly to the top of the frame by metal rods; the whole cage is reminiscent of the Virtuality rigs of the ’90s. Put on an Oculus Rift headset at the same time, and you are all set for a virtual reality gaming world.
In the beginning, a Kinect was used to track the movement of the feet and translate it into in-game controls. Later, sensors were embedded in the base for this. Now, tracking hardware in the shoes themselves is responsible for this task.
Adi Robertson of The Verge writes about trying the device: “The technology seems more polished than it was at last year’s CES, but my experience was similar to Ellis Hamburger’s in 2014. The main difference is that I didn’t feel sick at all, despite playing a fast-paced VR shooting demo that would have churned my stomach had I been sitting. If you prefer, you can even use it without the Rift; Virtuix founder Jan Goetgeluk says some people have pre-ordered it simply as a way to exercise while playing.”
Dean Takahashi of VentureBeat writes: “I saw the production version of the Omni treadmill for the first time at the 2015 International CES, the big tech show in Las Vegas. Virtual reality was one of its major themes. The Omni works with a variety of headsets, such as the Samsung Gear VR or the Oculus Rift, and it is one of many devices aimed at getting gamers off the couch and active.”

Move freely in the virtual world

The device engages the user’s entire body and changes the gameplay experience. You will be able to move from side to side, walk, run, or even jump, in a full 360 degrees. You are free to move through the virtual environment without any restrictions. Dive into virtual quests, enjoy the game, and stay in shape with all the activity on the treadmill.


Get Ready for 10TB SSDs With New Technology


SSDs and other flash memory devices will soon be cheaper and roomier, thanks to big announcements from Toshiba and Intel. The two companies unveiled new “3D NAND” memory chips, which are stacked in layers to pack in more data, as opposed to the single-layer chips currently used. Toshiba said it has created the world’s first 48-layer NAND, yielding a 16 GB chip with increased speed and reliability. The Japanese company invented flash memory in the first place and makes the smallest NAND cell in the world at 15 nm. Toshiba is now manufacturing engineering samples, but products with the new chips are not expected for another year or so.

At the same time, Intel and its partner Micron showed they are now producing their own 32-layer NAND chips, which should also appear in SSDs in about a year. They are sampling even higher-capacity NAND than Toshiba, with 32 GB chips available and a 48 GB version to follow shortly. Micron says the chips can be used to make gum-stick-sized M.2 PCIe SSDs of up to 3.5 TB and 2.5-inch SSDs with 10 TB of capacity, on par with the latest hard drives. All this means that Toshiba, Intel/Micron and the companies buying their chips will shortly give a little more competition to Samsung, which has been using 3D NAND tech for much longer. The result will be nothing but good for consumers: higher-capacity, cheaper SSDs that will have spinning hard drives sleeping with one eye open.
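Those capacities imply a lot of silicon per drive. As a rough back-of-the-envelope check (assuming the marketing convention of 1 TB = 1000 GB, and ignoring over-provisioning), here is how many NAND dies of each announced size a drive would need:

```python
import math

# Rough estimate of NAND dies per SSD.
# Assumes 1 TB = 1000 GB (marketing units) and no over-provisioning;
# real drives reserve extra flash, so actual die counts run higher.
def dies_needed(drive_tb, die_gb):
    return math.ceil(drive_tb * 1000 / die_gb)

print(dies_needed(10, 32))   # 10 TB drive from Micron's 32 GB dies -> 313
print(dies_needed(10, 48))   # 10 TB drive from the upcoming 48 GB dies -> 209
print(dies_needed(3.5, 32))  # the 3.5 TB gum-stick M.2 SSD -> 110
```

Even with 48 GB dies, a 10 TB drive still needs a couple of hundred of them, which is why stacking layers vertically, rather than further planar shrinks, is the route to these capacities.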


Facebook’s Aquila Drone Will Beam Internet With Lasers

As the second day of its F8 conference began here at Fort Mason in San Francisco, Facebook announced the first hardware it plans to use to beam the Internet down to billions of people around the world.

Code named Aquila, the drone has a wingspan comparable to a Boeing 767 yet uses lightweight materials that allow it to weigh less than a car.

During today’s keynote, Facebook also announced that it open sourced its development tool React Native, and showed off new artificial intelligence systems that can identify and understand the meaning of video and text content.

Aquila has to be incredibly light, because it’s going to be kept aloft for as long as three months at a time using solar power. Just staying in the air for that long is a challenge, but Facebook’s also going to be pushing Internet access down to people 60,000-90,000 feet below using lasers, as well as maintaining communications between drones to maintain coverage across wider regions.

Aquila is the first complete concept we’ve seen come out of Facebook’s acqui-hires of engineers from UK-based Ascenta, unveiled nearly a year ago today. Facebook says it’ll begin test flights this summer, with a broader rollout over the next several years.


Biggest Challenges In Virtual Reality, According To Oculus

The second half of Facebook’s keynote took an odd turn today at F8, spending a surprising amount of time looking at the unknowns of consciousness and visual illusions. But toward the end, Oculus Chief Scientist Michael Abrash spent a few minutes talking about the biggest opportunities for improvement in the virtual-reality experience: haptics, visuals, audio and tracking.


While previous versions of the Oculus Rift have required plugging in a separate set of headphones to get the “I’m encapsulated in a virtual space” feeling, the Crescent Bay prototype has its own pair built in, and the company has said the consumer model will, as well. The Crescent Bay model I played with at CES earlier this year didn’t include features like noise cancellation, so there are still obvious areas to improve in that aspect of the integrated experience over time.


On the tracking front, the Crescent Bay demo proved that Oculus can provide nigh-perfect camera-based tracking when you limit the space you can move around in to a four-foot by four-foot square. If you look outside the Oculus ecosystem to the HTC/Valve Vive headset, the technology is already out there to allow for a full-on Holodeck-style “I can walk around in this space” experience that gives you as much as a living room’s worth of space.


The visuals issue is a little trickier. There are a few main ways Oculus can improve the graphics in the Oculus Rift headset as it exists today. The “easiest” method is to simply improve the hardware on the PC side of the equation: If you throw a better graphics card in the desktop powering your headset, you can get more dynamic shadows, more polygons and prettier textures in your games at the same screen resolution without a drop in performance.


That’s basically what Oculus has been doing with each of its public demos. Every time I’ve gone in for one, I’ve been able to get the engineer running the session to brag about the GPU they’re using (at CES it was a ~$600 GeForce GTX 980). But in the long run Oculus is going to improve the optics and display in the headset itself.


Even in the impressive Crescent Bay prototype, you’ve only got a 90-degree field of vision, while your eyes are normally capturing 280 degrees. To remove this “binocular” effect (and any visible pixels in your vision), you’d need better lenses and a display with a resolution higher than even Apple’s new Retina 5K iMac, which starts at $2,499. It’s not hard to imagine Oculus getting closer to that ideal spec over the next several years, but it’ll happen with gradual iteration and improvement (and will require early adopters to keep buying high-end rigs for development and gaming).
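The display gap can be sketched with simple pixels-per-degree arithmetic. A common rule of thumb, an assumption here rather than a figure from Oculus, is that eye-limited “retina” sharpness needs roughly 60 pixels per degree:

```python
# Pixels-per-degree sketch for VR displays.
# The 60 px/degree figure is a common rule of thumb for eye-limited
# sharpness; it is an assumption for illustration, not an Oculus spec.
def horizontal_pixels_needed(fov_degrees, px_per_degree=60):
    return fov_degrees * px_per_degree

print(horizontal_pixels_needed(90))   # today's ~90-degree Rift FOV -> 5400 px
print(horizontal_pixels_needed(280))  # full human visual field -> 16800 px
```

Even at today’s 90-degree field of view, 5,400 horizontal pixels already exceeds the 5,120 of a Retina 5K iMac panel, and widening to 280 degrees roughly triples the requirement again, which is why only gradual iteration seems plausible.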


The haptic side of things is perhaps the most difficult to address. Oculus has yet to demonstrate an “official” control mechanism for the Rift, instead letting each developer use the peripherals they find most suitable. During his keynote, however, Abrash noted that “[he expects] hands to be as capable in VR as they are in the real world” in just a few years. That could be a hint at something along the lines of Leap Motion’s hand-tracking solution for OSVR, which embeds a camera at the front of your head-mounted display.


However, the slide Abrash showed during the keynote specifically noted haptics, which means Oculus is thinking not only about tracking hands to interact with virtual worlds, but having the world interact back — letting you feel the things you reach out to touch. It’s one thing to expect people to strap on a headset for any amount of time, but gloves and other hardware solutions to VR haptics require extra bulk and maybe even charging another set of peripherals. But if it works, haptics in your VR controllers could make experiences far more immersive than even the best projects we’ve seen to date.


It’s a big challenge that’s easy to mess up, so it’ll be interesting to see whether Oculus can figure out something consumers will buy into later this year, or if it’s a longer-term issue to be addressed in future iterations.