Created by Chinese smartphone company Nubia (which is partially owned by ZTE), the Nubia X solves the problem of where to put the selfie cam on an all-screen phone by dodging the question entirely. That’s because instead of using the main 6.1-inch LCD screen and a front-facing camera to take selfies, you can simply flip the phone around and use its rear camera and 5.1-inch secondary 1520 x 720 OLED screen on the back to frame up your shot.
This solution might sound like overkill, but in some ways it makes for a simpler overall design. Cameras are quickly becoming more difficult and expensive to make than screens, and by including only one camera module, on the back, phone makers can focus on delivering a single, high-quality photography experience.
On top of that, since so many phones already pair a glass front with a glass back, the Nubia X shouldn’t be much more fragile than a typical handset. Also, that extra display can be used for way more than just selfies. Nubia says its rear, always-on display can show off your favorite art or be used as a clock, or it can double as a full-on second display with access to all your standard Android screens and apps.
Now, the back of your phone doesn’t need to be reserved for blank glass.
Image: Nubia
Inside, the Nubia X’s specs look pretty solid as well—featuring a Qualcomm Snapdragon 845 chip, 6GB or 8GB of RAM, up to 128GB of storage, and a sizable 3,800 mAh battery. And because there’s no room on the front or back for a traditional fingerprint sensor, Nubia opted for an in-screen fingerprint reader like we’ve seen on the OnePlus 6T and Huawei Mate 20 Pro.
Deep learning has a DRAM problem. Systems designed to do difficult things in real time, such as telling a cat from a kid in a car’s backup camera video stream, are continuously shuttling the data that makes up the neural network’s guts from memory to the processor.
The problem, according to startup Flex Logix, isn’t a lack of storage for that data; it’s a lack of bandwidth between the processor and memory. Some systems need four or even eight DRAM chips to sling hundreds of gigabits per second to the processor, which adds a lot of space and consumes considerable power. Flex Logix says that the interconnect technology and tile-based architecture it developed for reconfigurable chips will lead to AI systems that need the bandwidth of only a single DRAM chip and consume one-tenth the power.
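The bandwidth arithmetic behind this claim can be sketched with rough numbers. The model size, frame rate, and per-chip DRAM bandwidth below are illustrative assumptions, not Flex Logix’s or any vendor’s figures:

```python
# Back-of-envelope estimate of the DRAM bandwidth needed to stream a
# neural network's weights for real-time inference.
# All numbers are illustrative assumptions, not vendor figures.

model_size_gbit = 2.0   # assumed weights: a ~250 MB model is 2 gigabits
frames_per_sec = 60     # assumed real-time video rate

# If the weights are re-read from DRAM on every frame:
required_gbps = model_size_gbit * frames_per_sec

dram_chip_gbps = 32.0   # assumed bandwidth of one commodity DRAM chip

# Ceiling division: how many DRAM chips that bandwidth demands
chips_needed = -(-required_gbps // dram_chip_gbps)
print(f"required: {required_gbps:.0f} Gb/s, DRAM chips: {chips_needed:.0f}")
# → required: 120 Gb/s, DRAM chips: 4
```

Under these assumed numbers, even a mid-sized model at video rates demands several DRAM chips—which is the multi-chip, power-hungry setup the article describes. Keeping weights in on-chip SRAM removes most of that streaming traffic.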
[…]
In developing the original technology for FPGAs, Wang noted that these chips were about 80 percent interconnect by area, and so he sought an architecture that would cut that area down and allow for more logic. He and his colleagues at UCLA adapted a kind of telecommunications architecture called a folded-Beneš network to do the job. This allowed for an FPGA architecture that looks like a bunch of tiles of logic and SRAM.
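A multistage network like the Beneš cuts interconnect area because its switch count grows as N·log N rather than the N² of a full crossbar. The 80 percent figure above is Flex Logix’s; the formulas below are the standard textbook ones, shown here only to illustrate the scaling:

```python
import math

# Switch-count comparison: full crossbar vs. Benes network for N endpoints.
# Textbook formulas; illustrates why multistage interconnects save area.

def crossbar_switches(n):
    # A full crossbar needs one crosspoint per input/output pair.
    return n * n

def benes_switches(n):
    # A Benes network on n = 2^k endpoints has (2*log2(n) - 1) stages,
    # each made of n/2 two-by-two switches.
    stages = 2 * int(math.log2(n)) - 1
    return (n // 2) * stages

for n in (64, 1024):
    print(n, crossbar_switches(n), benes_switches(n))
# → 64 4096 352
# → 1024 1048576 9728
```

At 1,024 endpoints the crossbar needs over a million crosspoints versus under ten thousand 2×2 switches for the Beneš network, which is the kind of area reduction that makes room for more logic and distributed SRAM tiles.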
Image: Flex Logix
Flex Logix says spreading SRAM throughout the chip speeds up computation and lowers power.
Distributing the SRAM in this specialized interconnect scheme winds up having a big impact on deep learning’s DRAM bandwidth problem, says Tate. “We’re displacing DRAM bandwidth with SRAM on the chip,” he says.
[…]
True apples-to-apples comparisons in deep learning are hard to come by. But Flex Logix’s analysis comparing a simulated 6 x 6-tile NMAX512 array with one DRAM chip against an Nvidia Tesla T4 with eight DRAMs showed the new architecture identifying 4,600 images per second versus Nvidia’s 3,920. The same size NMAX array hit 22 trillion operations per second on a real-time video processing test called YOLOv3 using one-tenth the DRAM bandwidth of other systems.
The designs for the first NMAX chips will be sent to the foundry for manufacture in the second half of 2019, says Tate.
In the future, you might talk to an AI to cross borders in the European Union. The EU and Hungary’s National Police will run a six-month pilot project, iBorderCtrl, that will help screen travelers in Hungary, Greece and Latvia. The system will have you upload photos of your passport, visa and proof of funds, and then use a webcam to answer basic questions from a personalized AI border agent. The virtual officer will use AI to detect the facial microexpressions that can reveal when someone is lying. At the border, human agents will use that info to determine what to do next — if there are signs of lying or a photo mismatch, they’ll perform a more stringent check.
The real guards will use handhelds to automatically double-check documents and photos for these riskier visitors (including images from past crossings), and they’ll only take over once these travelers have gone through biometric verification (including face matching, fingerprinting and palm vein scans) and a re-evaluation of their risk levels. Anyone who passed the pre-border test, meanwhile, will skip all but a basic re-evaluation and will only need to present a QR code.
The pilot won’t start with live tests. Instead, it’ll begin with lab tests and will move on to “realistic conditions” along the borders. And there’s a good reason for this: the technology is very much experimental. iBorderCtrl was just 76 percent accurate in early testing, and the team only expects to improve that to 85 percent. There are no plans to prevent people from crossing the border if they fail the initial AI screening.
Most people would appreciate a chatbot that offers sympathetic or empathetic responses, according to a team of researchers, though they added that this reaction may depend on how comfortable the person is with the idea of a feeling machine.
In a study, the researchers reported that people preferred receiving sympathetic and empathetic responses from a chatbot—a machine programmed to simulate a conversation—over receiving a response from a machine without emotions, said S. Shyam Sundar, James P. Jimirro Professor of Media Effects and co-director of the Media Effects Research Laboratory. People express sympathy when they feel compassion for a person, whereas they express empathy when they are actually feeling the same emotions as the other person, said Sundar.
[…]
However, chatbots may become too personal for some people, said Bingjie Liu, a doctoral candidate in mass communications, who worked with Sundar on the study. She said that study participants who were leery of conscious machines indicated they were impressed by the chatbots that were programmed to deliver statements of sympathy and empathy.
“The majority of people in our sample did not really believe in machine emotion, so, in our interpretation, they took those expressions of empathy and sympathy as courtesies,” said Liu. “When we looked at people who have different beliefs, however, we found that people who think it’s possible that machines could have emotions had negative reactions to these expressions of sympathy and empathy from the chatbots.”