From Engadget: OUYA, XBMC sitting in a tree, media s-h-a-r-i-n-g (update: TuneIn, new pics)


OUYA’s slew of collaborations isn’t letting up, even with less than two days to go before its fundraising round is over. The XBMC team has just pledged that its upcoming Android app will be tailored to work with the console. While the exact customizations aren’t part of the initial details, the media center app’s developers will have early access to prototypes of the OUYA hardware. There are suggestions that there won’t be much of a wait for the Android port of XBMC, whether or not you’re buying the cuboid system: XBMC’s developers note that Android work should be merged into the master branch once “final sign-offs” are underway. All told, the OUYA is quickly shaping up into as much of a go-to media hub as it is a game system.

Update: OUYA itself has posted word that TuneIn’s radio streaming is on its way as well. And just to top off its efforts, the company has posted rendered images that better show the scale of the console: our Joystiq compatriots note that it’s really a “baby GameCube” in size, and its gamepad looks gigantic by comparison.

from Engadget

From AnandTech: ARM Announces 8-core 2nd Gen Mali-T600 GPUs

In our discrete GPU reviews for the desktop we’ve often noticed the tradeoff between graphics and compute performance in GPU architectures. Generally speaking, when a GPU is designed for compute it tends to sacrifice graphics performance or vice versa. You can pursue both at the same time, but within a given die size the goals of good graphics and compute performance are usually at odds with one another.

Mobile GPUs aren’t immune to making this tradeoff. As mobile devices become the computing platform of choice for many, the same difficult decisions about balancing GPU compute and graphics performance must be made.

ARM announced its strategy for dealing with the graphics/compute split earlier this year. In short: create two separate GPU lines, one in pursuit of great graphics performance and one optimized for both graphics and compute.

Today, all of ARM’s shipping GPUs fall on the blue graphics trend line in the image above. The Mali-400 is the well-known example, but the forthcoming Mali-450 (an 8-core Mali-400 with slight improvements to IPC) is also a graphics-focused part.

The next-generation ARM GPU architecture, codenamed Midgard but productized as the Mali-T600 series, will have members optimized for graphics performance as well as for high-end graphics/GPU compute performance.

The split looks like this:

The Mali-T600 series is ARM’s first unified shader architecture. The parts on the left fall under the graphics roadmap, while the parts on the right are optimized for graphics and GPU compute. To make things even more confusing, the top part in each column is actually a second-generation T600 GPU, announced today.

What does the second generation of T600 give you? Higher IPC and higher clock speeds in the same die area, thanks to some reworking of the architecture, as well as support for ASTC (an optional OpenGL ES texture compression spec we talked about earlier today).

Both the T628 and T678 are eight-core parts; the primary difference between the two (and between graphics- and GPU compute-optimized ARM GPUs in general) is the composition of each shader core. The T628 features two ALUs, an LSU and a texture unit per shader core, while the T678 doubles up the ALUs per core.

Long term, you can expect high-end smartphones to integrate cores from the graphics & compute optimized roadmap, while mainstream and lower-end smartphones will pick from the graphics-only roadmap. All of this sounds good on paper; however, there’s still the fact that we’re talking about the second generation of Mali-T600 GPUs before the first generation has even shipped. We will see the first-gen Mali-T600 parts before the end of the year, but there’s still a lot of room for improvement in the way mobile GPUs and SoCs are launched…

from AnandTech

From Popular Science – New Technology, Science News, The Future Now: Physicists Demonstrate Working Quantum Router, a Step Toward a Quantum Internet

Quantum computer chip (image: Wikimedia Commons)

As much as we love our silicon semiconductors, quantum computers are very much a technology of the future. Instead of the usual string of 1s and 0s, they’ll be able to send both types of information at the same time, dwarfing their traditional counterparts. But one major problem is that the photons carrying that quantum information can only move through a single optical fibre. To push more information through, they need a router, and Chinese physicists have unveiled the first one.

In a quantum computer, photons ferry information to other sources. It’s possible to send the photons directly through one fibre, but routing comes in when another fibre is needed. Like the router you probably own, a control signal reads the data and then sends it on to its destination. But dealing with unruly quantum particles is a little more complicated: when a quantum signal is read, it’s also destroyed. So even though the data can be transferred with traditional methods, doing so gives up the kind of data-transferring power quantum computing promises.

This new quantum router proves it’s possible to truly guide a quantum signal. The information used is encoded in two different types of polarized photons (like 1s and 0s). Scientists then create a single photon that acts as both (the combined 1s and 0s). That photon is then broken down into two photons that share the combined state. The router picks up one to determine the route, then the other photon is used to transfer the information. A simple series of half mirrors guides the photons along the correct route.
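The measure-one, forward-the-other scheme is easier to see laid out as logic than as prose. Below is a deliberately crude Python sketch of the routing flow described above, not a faithful quantum simulation: the entangled pair is reduced to a pair of shared polarization amplitudes, measuring the control photon picks the output fibre, and the data photon is forwarded without ever being read. The port names and example amplitudes are illustrative only.

```python
import numpy as np

def make_signal(alpha, beta):
    """Polarization amplitudes for a|H> + b|V>, normalized."""
    state = np.array([alpha, beta], dtype=complex)
    return state / np.linalg.norm(state)

def route(signal, rng=None):
    """Toy router: measure only the control half of the entangled pair to
    pick a fibre, then forward the untouched data photon down that fibre."""
    rng = rng or np.random.default_rng()
    p_horizontal = abs(signal[0]) ** 2                     # chance of measuring |H>
    port = "fibre A" if rng.random() < p_horizontal else "fibre B"
    return port, signal                                    # data photon is never read here

port, forwarded = route(make_signal(0.6, 0.8))
print(port, np.round(forwarded, 3))
```

The real experiment does this with an entangled photon pair and a series of half mirrors; the sketch only mirrors the control flow, which is the point the researchers are demonstrating.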

Does this mean we’re now well on our way to a globally connected, super-fast stream of information? No. The scientists say it’s just a proof of concept: we know it’s at least theoretically possible to send quantum information through a router, but it’s still a limited way of doing it. In other words, when this sort of technology is usable (and it will be), it won’t look like this.

[Technology Review via Gizmodo]

from Popular Science – New Technology, Science News, The Future Now

From Engadget: ARM claims new GPU has desktop-class brains, requests OpenCL certificate to prove it


It’s been a while since ARM announced its next generation of Mali GPUs, the T604 and T658, but in the semiconductor business silence should never be confused with inactivity. Behind the scenes, the chip designers have been working with Khronos (that great keeper of open standards) to ensure the new graphics processors are fully compliant with OpenCL, and can therefore put their silicon to work on general compute tasks (AR, photo manipulation, video rendering and so on) as well as on producing pretty visuals.

Importantly, ARM isn’t settling for the Embedded Profile version of OpenCL that has been “relaxed” for mobile devices, but is instead aiming for the same Full Profile OpenCL 1.1 found in compliant laptop and desktop GPUs. A tall order for a low-power processor, perhaps, but we have a strong feeling that Khronos’s certification is just a formality at this point, and that today’s news is a harbinger of real, commercial T6xx-powered devices coming before the end of the year. Even the souped-up Mali 400 in the European Galaxy S III can only reign for so long.
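The Full versus Embedded Profile distinction is something you can check from software, since OpenCL exposes it through the standard device-info query. As a quick illustration, here is a minimal sketch using the pyopencl bindings (it assumes a working OpenCL runtime and driver are installed on the machine):

```python
import pyopencl as cl

# Walk every OpenCL platform and device on the system and report whether it
# claims FULL_PROFILE (what ARM is targeting for Mali-T6xx, and what desktop
# GPUs report) or EMBEDDED_PROFILE (the relaxed mobile variant).
for platform in cl.get_platforms():
    for device in platform.get_devices():
        profile = device.get_info(cl.device_info.PROFILE)
        version = device.get_info(cl.device_info.VERSION)
        print("%s: %s (%s)" % (device.name, profile, version))
```

A Full Profile device must support the complete OpenCL feature set (including, for example, 64-bit-safe atomics and the full built-in math precision requirements), which is exactly the bar ARM says the new Mali parts will clear.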

from Engadget

From Engadget: Kinect Toolbox update turns hand gestures into mouse input, physical contact into distant memory


Using Microsoft’s Kinect to replace a mouse is often considered a Holy Grail for developers; there were hacks and other tricks to get it working well before Kinect for Windows was even an option. David Catuhe, a lead Technical Evangelist for Microsoft in France, has just provided a less makeshift approach. The 1.2 update to his Kinect Toolbox side project introduces hooks to control the mouse outright, including a ‘magnetic’ mode that draws the cursor from its original position. To help keep the newly fashioned input (among other gestures) under control, Catuhe has also taken advantage of the SDK 1.5 release to check that the would-be hand-waver is sitting and staring at the Kinect before accepting any input. The open-source Windows software is available to grab for experimentation today, so if you think hands-free control belongs as much on the PC desktop as in the car, you now have a ready-made way to make the dream a reality… at least, until you have to type.
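Kinect Toolbox itself is a C#/WPF library and its actual API isn’t reproduced here, but the basic job it has to do, projecting a tracked hand position onto screen coordinates, smoothing it, and moving the Windows cursor, can be sketched in a few lines. The Python below is an illustrative approximation only (the normalized-coordinate convention and SMOOTHING constant are assumptions, not Catuhe’s implementation) and relies on the Win32 SetCursorPos call, so it is Windows-only:

```python
import ctypes

user32 = ctypes.windll.user32
SCREEN_W = user32.GetSystemMetrics(0)     # SM_CXSCREEN: primary display width in pixels
SCREEN_H = user32.GetSystemMetrics(1)     # SM_CYSCREEN: primary display height in pixels
SMOOTHING = 0.3                           # illustrative low-pass factor, 0..1

_cursor = [SCREEN_W // 2, SCREEN_H // 2]  # last cursor position we set

def hand_to_cursor(hand_x, hand_y):
    """Map a hand position normalized to 0..1 within the tracked interaction
    box onto screen pixels, smooth it against the previous position to tame
    skeletal-tracking jitter, then move the Windows cursor there."""
    target_x = hand_x * SCREEN_W
    target_y = hand_y * SCREEN_H
    _cursor[0] = int(_cursor[0] + SMOOTHING * (target_x - _cursor[0]))
    _cursor[1] = int(_cursor[1] + SMOOTHING * (target_y - _cursor[1]))
    user32.SetCursorPos(_cursor[0], _cursor[1])

# Example: a hand held slightly right of center and above center.
hand_to_cursor(0.6, 0.4)
```

The smoothing term is the interesting design choice: without it, raw skeletal data makes the cursor shake, which is presumably part of what the ‘magnetic’ behavior in the real library is meant to counter.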

from Engadget

From AnandTech: The 16GB Nexus 7: Storage Performance

I first started seriously looking at the integrated storage performance of tablets and smartphones in our Nexus 7 review. I had casually looked at it in the past, but users complaining of poor system responsiveness during background writes on ASUS’ Transformer Prime/Pad series demanded something a little more thorough.

As I mentioned in our Nexus 7 review, most tablet and smartphone makers integrate a single-chip controller + NAND combo to save on cost and space. In the case of the 8GB Nexus 7, you get an 8GB eMMC package from Kingston; inside this tiny package are an eMMC controller and NAND die. The component list should sound familiar to anyone who remembers the earliest affordable MLC SSDs for PCs, particularly in the absence of any on-board DRAM for caching duties. The lack of DRAM is only part of the issue; the fact of the matter is that these cheap eMMC controllers just aren’t very fast, at least compared to high-end SSD controllers. Things will get better over time, but for now cost is still a major concern.

The Kingston controller in the 8GB Nexus 7 is much faster than what ASUS uses in the Transformer Prime/Pad series, but I had heard the controller in the 16GB models was even quicker. I just got my hands on a 16GB N7 and ran through the Android version of our standard four-corners SSD tests using Androbench. Just like last time, I increased read/write sizes to 100MB in order to get consistent results out of the device.
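For readers who want to reproduce the idea of a four-corners test on their own hardware, the sketch below approximates the methodology in plain Python: sequential 256KB writes and reads plus random 4KB reads and writes over a 100MB scratch file. It is only an approximation of tools like Androbench, the file name and operation counts are mine, and because it goes through the OS page cache (no direct I/O) the read numbers in particular will be optimistic:

```python
import os, random, time

PATH = "iobench.tmp"            # scratch file on the storage under test (illustrative name)
TOTAL = 100 * 1024 * 1024       # 100MB, matching the transfer size used above
SEQ_BLOCK = 256 * 1024          # 256KB sequential blocks
RND_BLOCK = 4 * 1024            # 4KB random blocks
RND_OPS = 2048                  # number of random operations per test

def report(label, nbytes, seconds):
    print("%-9s %6.1f MB/s" % (label, nbytes / (1024 * 1024) / seconds))

# Sequential write: stream 256KB blocks until 100MB is on disk, then fsync.
buf = os.urandom(SEQ_BLOCK)
start = time.time()
with open(PATH, "wb") as f:
    for _ in range(TOTAL // SEQ_BLOCK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())
report("seq write", TOTAL, time.time() - start)

# Sequential read of the same file.
start = time.time()
with open(PATH, "rb") as f:
    while f.read(SEQ_BLOCK):
        pass
report("seq read", TOTAL, time.time() - start)

# Random 4KB reads and writes at block-aligned offsets within the file.
offsets = [random.randrange(TOTAL // RND_BLOCK) * RND_BLOCK for _ in range(RND_OPS)]

start = time.time()
with open(PATH, "rb") as f:
    for off in offsets:
        f.seek(off)
        f.read(RND_BLOCK)
report("rnd read", RND_OPS * RND_BLOCK, time.time() - start)

small = os.urandom(RND_BLOCK)
start = time.time()
with open(PATH, "r+b") as f:
    for off in offsets:
        f.seek(off)
        f.write(small)
    f.flush()
    os.fsync(f.fileno())
report("rnd write", RND_OPS * RND_BLOCK, time.time() - start)

os.remove(PATH)
```

The random 4KB cases are the ones that expose cheap eMMC controllers, which is exactly where the charts below separate the 8GB and 16GB Nexus 7.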

Sequential Read (256KB) Performance

Sequential read speed is around 14% slower on the 16GB part, but it’s still higher than what you’ll get out of a Transformer Pad Infinity. The drop here is unfortunate, as sequential read performance does matter; that said, it’s really the only downside to the 16GB model’s IO performance. The drop also isn’t significant enough to cause any additional stuttering or otherwise undesirable behavior.

Sequential Write (256KB) Performance

Sequential write speed is up by 24%, putting the Nexus 7 further ahead of the other devices I tested here.

Random Read (4KB) Performance

Random read performance shoots up by over 60%, putting the 16GB Nexus 7 ahead of the Galaxy Nexus.

Random Write (4KB) Performance

Random write performance sees a 43% increase, putting good distance between the 16GB and 8GB N7s. None of these numbers are particularly good (we’re still talking about mechanical hard drive levels of performance here) but it’s definitely a step in the right direction.

It’s always possible that we’ll see multiple controllers used in the 8 and 16GB Nexus 7s, but for now all of the 16GB models use the same controller. The difference in IO performance isn’t significant enough to push you towards the $250 Nexus 7 if you don’t need the extra space, but consider it an added benefit if you do order the 16GB model.

from AnandTech

From Legit Reviews Hardware Articles: Google Nexus 7 Tablet Review – The $200 Jelly Bean Tablet


The Nexus 7 is a no-compromise Android tablet that just happens to be the first tablet designed by Google. With a stunning 7″ IPS display, a powerful 1.3GHz quad-core NVIDIA Tegra 3 processor and up to 8 hours of battery life during active use, the Nexus 7 was built to bring you the best of everything. Read on to see the features of this tablet and to take a look at how it performs in some benchmarks!

from Legit Reviews Hardware Articles