Overview of gaming on Tegra: what it is and what it is all about. What is the Tegra K1 capable of?

Digital Storm has evaluated the computing and graphics capabilities of the Tegra K1 processor, which NVIDIA introduced at the beginning of the year.

Recall that the Tegra K1 contains a 192-core graphics accelerator based on the Kepler architecture, with support for the DirectX 11 and OpenGL 4.4 APIs, tessellation, and Unreal Engine 4. The chip comes in two versions: one includes four 32-bit ARM Cortex-A15 cores clocked at up to 2.3 GHz plus an additional companion core; the other has two 64-bit Denver ("Super Core") cores running at up to 2.5 GHz.


For the performance evaluation, the Tegra K1 was tested inside the Xiaomi Mi Pad tablet, which was compared against the LG G2, NVIDIA Shield, OnePlus One, and iPad mini with Retina display using the 3DMark and AnTuTu benchmarks.


In graphics performance, the Tegra K1 is reported to significantly outperform all competitors. In the 3DMark graphics test, the Mi Pad scored 33,014 points; for comparison, its closest competitor, the OnePlus One with a Qualcomm Snapdragon 801 chip and Adreno 330 graphics, scored 21,001 points.


The Tegra K1's overall 3DMark result in the Mi Pad was 27,096 points. The OnePlus One came second with 19,518 points, followed by the NVIDIA Shield (16,173 points), LG G2 (15,464 points), and iPad mini with Retina display (14,537 points).

In the AnTuTu test, the Tegra K1 also posted the highest result among the devices named:


NVIDIA highlights that the Tegra K1 is the first mobile processor on the market to deliver graphics performance comparable to that of the Xbox One and PlayStation 4 and superior to that of the Xbox 360 and PlayStation 3. The company intends to use the Tegra K1 in its second-generation Shield gaming device, and is also preparing a tablet based on the chip. That device is known to have an 8-inch touchscreen with a resolution of 2048x1536 pixels, cameras with 7- and 4.8-megapixel sensors, Wi-Fi and Bluetooth adapters, and a GPS receiver.

Anyone interested in news from the world of mobile technology surely knows that CES, the international consumer-electronics show, is taking place right now in distant Las Vegas. The number of new products on display is off the scale, but broadly they fall into two categories. The first is products whose release date is still unknown. The second is novelties that will not only go into mass production very soon but will also qualitatively change our lives. Among the latter is the new Tegra K1 chip, which houses a 192-core Kepler GPU. Games on Android devices, it seems, are reaching a whole new level.

Many of you probably wondered right away why the chip-naming scheme changed. Nvidia, recall, previously used a simple scheme of incrementing numbers: the two previous models were called Tegra 3 and Tegra 4, so many users expected a logical continuation. That didn't happen, and Nvidia CEO Jen-Hsun Huang explained why:

We used this name because of the fundamental differences from previous generations. Tegra K1 is simply incomparable to its predecessors. This is the best architecture we have ever created.

Still, don't be too dazzled by that huge number. The main reason Nvidia leads with a 192-core figure is, of course, marketing: 192 sounds a lot more impressive than 4, doesn't it? As for the Tegra K1's CPU, it is a quad-core Cortex-A15.

Other notable improvements include better energy efficiency and an increase in the maximum supported RAM to 8 GB. The chip also boasts support for Unreal Engine 4, OpenGL 4.4 and DirectX 11, which, together with everything above, should allow Android smartphones and tablets to run games at the level of high-end personal computers and latest-generation game consoles.
The chip will be produced in two versions. The first, based on a 32-bit processor, will appear in devices in the first half of 2014, while the second will use a 64-bit processor of Nvidia's own design and will arrive in the second half of the year.
If all these figures mean nothing to you, don't be discouraged: Nvidia has produced a video specifically to demonstrate the Tegra K1's capabilities.

Perhaps even those who are far from the world of mobile games will be able to assess the power of the chip after watching the video.
But the Tegra K1 is not limited to smartphones and tablets: the chip will also power Android game consoles and even cars.

The company's first mobile SoC with a Kepler graphics core

Introduction

The NVIDIA name has always been associated with three-dimensional graphics, so it is no surprise that all the company's solutions, including mobile ones, are judged by users and reviewers from that angle first. Over the past several years, 3D graphics has developed almost more actively than any other area of IT. Recall, for example, what was available in the mid-1990s, when real-time hardware 3D rendering was just beginning: Silicon Graphics' RealityEngine, with capabilities that look rather modest by today's standards. A modern smartwatch can do far more than that expensive box once could.

And that was a professional device; over the next few years, 3D graphics developed intensively in the consumer market. Everyone remembers 3dfx and its competitors, most of which are long gone. Not NVIDIA, though: the Californian company has held a leading position throughout, and within less than a decade it released the first consumer GPU (nominally perhaps not the very first, but the story with an earlier S3 chip is rather murky).

But what about now? A few more years have passed, and we carry computing devices in our pockets whose power is many times greater than those professional desktop solutions of yesteryear. Mobile 3D graphics is developing along the same path as desktop graphics and trails it only slightly in capability. Modern mobile GPUs can render complex scenes that even full-size gaming consoles could not handle a few years ago.

NVIDIA's Dan Vivoli illustrates the rapid progress of 3D graphics with Lexus cars of different model years. If the automotive industry had developed as rapidly as 3D graphics over the same period, a modern car would reach speeds 1,000 times higher, consume 400 times less fuel, and cost 500 times less. And that's not all: the car would be smaller than a Rubik's Cube!

The comparison is tongue-in-cheek, of course, and one might ask why compare modern 3D graphics technology with solutions from years ago. Because even the first Tegra mobile chip will soon be six years old! Time flies, yet NVIDIA still shipped an outdated video core in its mobile chips, which clearly did not help the company's image as the leader in graphics technology, nor the promotion of Tegra in the market.

For several years we have been asking NVIDIA representatives when the capabilities of the company's modern desktop graphics architectures would make it into its mobile chips, to say nothing of unifying the two. Clearly this is no easy task, and the company has been working on it for a long time, but all previous NVIDIA Tegra models used video cores whose foundations were laid many years ago: even in the fourth-generation chip, the GPU was functionally little different from the company's very first mobile solution.

In other respects there were almost no complaints about NVIDIA's Tegra systems-on-chip: they always had very capable CPU cores and distinguished themselves with interesting approaches to energy efficiency. The GPUs, though fast, remained functionally at the level of many years ago, which was fine for Tegra 3, quite popular in smartphones and especially tablets, but clearly no longer acceptable for Tegra 4. Moreover, it had long been clear that a unified, CUDA-compatible video core of the desktop Kepler architecture was a matter of the near future for Tegra.

And now, in its fifth mobile chip, formerly known by the code name Logan, the company has finally fulfilled our long-standing hopes. NVIDIA first revealed some details of the new generation at SIGGRAPH 2013, the well-known computer-graphics conference in which the company always participates, and an excellent venue for showing off the graphics and computing capabilities of a mobile system-on-chip, of which the GPU is the most important component.

But for the full announcement and disclosure of all the Tegra K1 details, NVIDIA chose a show aimed at consumer electronics: International CES 2014, taking place these days in Las Vegas. Real products based on the fifth-generation Tegra should reach the market in the first half of 2014. Let's take a look at what NVIDIA has done and changed in the mobile chip on which it is pinning such high hopes.

Fifth Generation NVIDIA Tegra

It has long been known that the main innovations in the Tegra K1 concern its graphics core; the rest has been modified only lightly, with no revelations. The Kepler GPU architecture was precisely the main change that made it worth releasing the Tegra K1: the new GPU brings not only far richer graphics functionality but also general-purpose computing and better energy efficiency, another area where Tegra had drawn some complaints.

Unsurprisingly, the GPU's capabilities are of primary interest to us. For now, note that the video core in the Tegra K1 includes 192 CUDA cores of the familiar Kepler architecture, and that it supports not only OpenGL ES 3.0 but also OpenGL 4.4, DirectX 11 and CUDA 6.

As for the other important part, the general-purpose CPU cores, the changes there are far less radical. NVIDIA will someday ship 64-bit CPU cores too, but the ARMv8 architecture is not in Tegra yet (see the addendum at the end of the article); these are still the same four Cortex-A15 cores with two megabytes of second-level cache, plus a fifth auxiliary "companion" core (also a Cortex-A15) designed for low-load, power-saving operation.

An important change in the CPU part of the new chip is that these are not quite the same Cortex-A15 cores used in Tegra 4. First, drawing on the experience of designing Tegra 4, NVIDIA's engineers were able to improve the Tegra K1's performance; second, all 4+1 general-purpose A15 cores are revision R3, whereas Tegra 4 used the less advanced R2. The R3 cores bring architectural changes aimed at lowering the required operating voltage and power consumption, which translates into better energy efficiency, a topic we will definitely return to later.

Continuing with the Tegra K1's characteristics: the camera module includes a powerful dual image signal processor (ISP) with throughput of up to 1.2 gigapixels/s and support for sensors of up to 100 megapixels. The display module supports resolutions up to Ultra HD, also known as 4K (as did Tegra 4, however), both for the built-in display and for external ones connected via HDMI 1.4a, and the chip can also decode H.265 video at that resolution. Among I/O ports, note support for a pair of USB 3.0 connectors.

The new NVIDIA chip is manufactured on TSMC's 28 nm HPM process, in contrast to the same foundry's 28 nm HPL process used for Tegra 4. As we have already written, TSMC offers several 28 nm process variants: HPM is somewhat better suited to mobile chips because it helps achieve higher clock speeds, while HPL is optimized for low leakage. This is probably part of why the maximum frequency of Tegra 4's compute cores was limited to 1.9-2.0 GHz, whereas the CPU cores in the Tegra K1 will run at up to 2.3 GHz.

Kepler Graphics Architecture

At last the best of NVIDIA's existing graphics architectures has come to the Tegra mobile chips. One would have liked this to happen back with Tegra 4, of course, but at the time the mobile chip with Kepler inside was not yet ready for production.

Now everything is in place. Where the outdated video core in Tegra 4 coexisted with the far more functional desktop Fermi, starting with Kepler NVIDIA has made a qualitative leap in mobile GPU functionality, one it plans to sustain going forward, as the diagram above shows. NVIDIA says the Maxwell architecture has been developed with future mobile use in mind, after which the two lines will finally merge into one.

Interestingly, when the first generation of Kepler was designed, the architecture was not yet planned for the Tegra family's mobile chips; that decision came with the second version of the architecture (the GK2xx video chips). So what does the Tegra K1 offer in its Kepler video core? The mobile GPU is code-named GK20A and consists of 192 CUDA cores, with a unified second-level cache, dedicated geometry and tessellation blocks, and ROP units.

If you don't remember the Kepler architecture: all GPUs of this family consist of one or more graphics processing clusters (GPCs), each with its own rasterization engine and one or more SMX multiprocessors. Each SMX, in turn, contains 192 compute cores, geometry and tessellation engines, and texture units (TMUs).

Clearly the mobile GPU in the Tegra K1 cannot yet contain very many execution units, so the engineers settled on a single GPC cluster containing one such SMX multiprocessor. It comprises 192 compute cores, a rasterization and tessellation unit, four ROPs, eight TMUs, shared cache memory and other functional blocks.

Is that a lot, and what well-known desktop and laptop parts is it comparable to? The GK20A in the Tegra K1 has half the compute resources of the desktop GeForce GT 640 video card based on the GK107, which has exactly twice as many CUDA cores. In other respects the mobile GK20A trails by a factor of four: it has four times fewer TMUs and ROPs. But don't forget, we are comparing a mobile GPU with a desktop solution, however modest! Moreover, the GPU clock in the Tegra K1 is quite high at 950 MHz, even slightly above the frequency of the video chip in the GeForce GT 640.
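The throughput gap can be sketched from the figures in the text alone. This is a back-of-the-envelope estimate, not an official specification: the 2 ops/cycle factor assumes one fused multiply-add per CUDA core per clock, as on desktop Kepler, and the roughly 900 MHz GT 640 clock is an assumed value for comparison.

```python
# Back-of-the-envelope peak single-precision throughput, using the
# figures quoted in the text: 192 CUDA cores at 950 MHz for the GK20A.
# Assumes one fused multiply-add (2 ops) per core per clock, as on
# desktop Kepler; the GT 640 clock below is an assumed ~900 MHz.

def peak_gflops(cuda_cores: int, clock_ghz: float, ops_per_cycle: int = 2) -> float:
    return cuda_cores * clock_ghz * ops_per_cycle

gk20a = peak_gflops(192, 0.95)   # mobile GPU in Tegra K1
gt640 = peak_gflops(384, 0.90)   # desktop GeForce GT 640 (GK107)

print(f"GK20A ~ {gk20a:.0f} GFLOPS, GT 640 ~ {gt640:.0f} GFLOPS")
```

Even under these assumptions, the mobile part lands within about a factor of two of the desktop card, which matches the core-count comparison above.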

Among comparable laptop parts, the GK107 has an improved sibling, the GK208, which is closest to the mobile GPU in the Tegra K1. Both the GK107 and GK208 have two SMX multiprocessors with 192 CUDA cores each, while the GK208 adds some extra compute capability and narrows the video memory bus from 128 bits to 64. This GPU underlies several notebook graphics cards (GeForce GT 720M, 730M, GT 735M and GT 755M) as well as desktop parts (GeForce GT 635 and the second revisions of the GT 630 and GT 640).

Naturally, the mobile GPU had to be seriously reworked. Mobile devices don't need very many ROP and TMU blocks, to say nothing of geometry and tessellation blocks that will be used at only a small fraction of their capacity. Even more important are other changes that benefit energy efficiency: in the GK20A, the interconnects between GPU blocks were greatly simplified, since a mobile version with one GPC and one SMX has no need to transfer data and balance work among multiple SMXs and clusters. Part of the control logic was therefore eliminated, and the mobile GPU overall became much simpler and more energy efficient.

None of this affected the functionality of the video core, which fully matches the capabilities of desktop Kepler, including geometry processing and tessellation, the splitting of geometric primitives into smaller ones. Tessellation is a major advantage of Kepler and is especially important for mobile devices, as it can process complex geometry far more efficiently than traditional methods.

For example, dynamically generating terrain and/or water-surface geometry at a chosen level of detail can yield a multi-fold performance increase (more than 50-fold, according to NVIDIA) compared with directly increasing the triangle count, which is all that outdated video cores supporting only OpenGL ES 2.0 can do.

You already know from PC games that tessellation, done right, can make the rendered image more realistic. It adds detail to objects in a 3D scene, especially noticeable along silhouettes, enables correct computation of shadows and global illumination, and adds geometric data dynamically only where it is needed.
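The saving described above can be made concrete with a toy calculation. The patch count and tessellation level below are illustrative assumptions, not Tegra K1 measurements; the N*N factor is the approximate triangle count produced by uniformly tessellating a triangular patch at level N.

```python
# Toy illustration of the tessellation saving: the CPU submits a coarse
# mesh and the GPU subdivides each patch on the fly. The patch count
# and level below are illustrative, not Tegra K1 measurements.

def triangles_after_tessellation(patches: int, level: int) -> int:
    # A uniform tessellation level of N splits each triangular patch
    # into roughly N*N smaller triangles.
    return patches * level * level

coarse = 2_000                       # triangles the CPU actually sends
fine = triangles_after_tessellation(coarse, 16)
print(f"{coarse} submitted patches -> {fine} rendered triangles")
# 256x the geometry, with no extra CPU work or vertex-buffer traffic.
```

This is why tessellation beats simply submitting more triangles on bandwidth-constrained mobile chips: the fine geometry never crosses the memory bus.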

As one example, NVIDIA presents a fully tessellated terrain demo using OpenGL 4, in which the Tegra K1 GPU easily handles several million polygons per frame at a steady 60 FPS. Another similar demo, Island, has been reworked for the mobile GPU.

Or remember the Stone Giant demo benchmark from BitSquid and Fatshark about three years ago, which required graphics cards with hardware Direct3D 11 and tessellation support? It now runs perfectly on the Tegra K1 mobile chip at full resolution, something not every desktop GPU could manage back then!

The new NVIDIA mobile chip supports other capabilities that the Kepler architecture opened up. For example, like all desktop Kepler GPUs, it supports so-called bindless texturing. In previous graphics architectures, the texture-binding model allowed simultaneous work with only 128 textures, each allocated a fixed slot in the binding table; Kepler introduced "bindless" textures, letting a shader program access textures in memory directly, without the binding table:

This raises the number of textures a single shader program can work with simultaneously to more than a million, which allows more unique textures and materials in one scene and can be used in techniques similar to the well-known MegaTexture from id Software's engines. In addition, bindless texturing reduces CPU usage during rendering by cutting the time the video driver spends processing requests.
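A toy model of the difference might look like this. This is purely illustrative Python, not the real API (actual bindless texturing, such as the ARB_bindless_texture OpenGL extension, hands shaders 64-bit GPU handles); it only contrasts a fixed 128-slot table with unlimited direct handles.

```python
# Toy model contrasting slot-based texture binding with bindless
# handles. Illustrative only: real bindless texturing uses 64-bit GPU
# handles, not Python dicts.

BINDING_SLOTS = 128  # the classic binding-table limit noted in the text

def bind_classic(textures):
    # Classic model: every texture must occupy one of 128 fixed slots.
    if len(textures) > BINDING_SLOTS:
        raise RuntimeError("binding table full")
    return {slot: tex for slot, tex in enumerate(textures)}

def bind_bindless(textures):
    # Bindless model: the shader just receives a handle per texture;
    # there is no fixed table, so the count is effectively unlimited.
    return {id(tex): tex for tex in textures}

many = [f"texture_{i}" for i in range(100_000)]
assert len(bind_bindless(many)) == 100_000   # no problem
try:
    bind_classic(many)                       # overflows the slot table
except RuntimeError as e:
    print("classic binding:", e)
```

The practical payoff is on the CPU side: with handles, the driver no longer has to validate and rewrite a binding table between draw calls.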

Like its desktop counterparts, the mobile Kepler GPU makes extensive use of data compression in various parts of the graphics pipeline. Compression and buffering are applied in many places, making more efficient use of video memory and of its bandwidth, which is always in short supply.

Texture compression is even more important for mobile devices than for PCs, since mobile chips have narrow memory access buses, and memory chips operate at a lower clock speed, resulting in a lack of bandwidth. Therefore, NVIDIA paid a lot of attention to efficient data compression and data buffering between different GPU units, as well as other methods to save memory bandwidth.

Thus the Tegra K1 video core employs primitive culling, hierarchical Z-cull, early-Z and Z-buffer compression, color-buffer compression, and several texture-compression methods: DXT, ETC and ASTC (Adaptive Scalable Texture Compression).

ASTC is a lossy block-based texture-compression algorithm developed by ARM; it is a common standard and an official extension for OpenGL and OpenGL ES. The method has advantages over DXT and ETC: it supports blocks of different sizes and formats (including HDR), provides better image quality, and significantly saves memory bandwidth.
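The flexibility of ASTC's block sizes is easy to quantify, because every ASTC block compresses to a fixed 128 bits regardless of its footprint; a quick sketch of the resulting bit rates:

```python
# Bit rates for several ASTC 2D block footprints. Every ASTC block
# compresses to a fixed 128 bits, so a larger footprint trades image
# quality for a lower bits-per-pixel rate.

ASTC_BLOCK_BITS = 128

def astc_bits_per_pixel(block_w: int, block_h: int) -> float:
    return ASTC_BLOCK_BITS / (block_w * block_h)

for w, h in [(4, 4), (6, 6), (8, 8), (12, 12)]:
    print(f"ASTC {w}x{h}: {astc_bits_per_pixel(w, h):.2f} bpp")
# 4x4 gives 8 bpp (4x smaller than raw 32 bpp RGBA8);
# 12x12 gives about 0.89 bpp.
```

DXT and ETC, by contrast, are locked to 4x4 blocks, which is exactly the "blocks of different sizes" advantage the text mentions.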

As an example, this NVIDIA illustration shows the compression of a typical user interface on the Kepler mobile video core, achieving savings of between 43% and 76%.

With the arrival of the modern Kepler video core, the Tegra K1 gained other features not directly related to 3D graphics, for example GPU-accelerated 2D rendering known as path rendering. This method defines a two-dimensional drawing as a sequence of resolution-independent outlines (paths) that can be filled with a color, gradient, image, and so on.

Unlike bitmaps, paths can be rotated and scaled without pixelation artifacts or similar issues, which comes in handy when working with PostScript, PDF, SVG, Flash, TrueType and OpenType font rendering, vector images, and the HTML5 canvas.

Today all vector-drawing operations run on the CPU, but GPU acceleration has its advantages: the video core can filter the image efficiently and quickly (including anisotropically), programmable shading becomes available (for example, complex bump mapping, which is slow on the CPU), and fast blending is supported. Other benefits include no need to render the image to a texture, the mixing of vector and 3D objects in one buffer, much higher performance, and reduced load on the CPU cores.
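Why paths survive scaling while bitmaps do not can be shown in a few lines. This is a hypothetical sketch, not the GPU path-rendering API: a path is just a list of control points, so scaling is an exact coordinate transform rather than a resampling of pixels.

```python
# A path is a list of control points, so scaling it is an exact
# coordinate transform: no pixels, hence no pixelation. Hypothetical
# toy code, not the GPU path-rendering API.

def scale_path(path, factor):
    # Transform every control point exactly; no information is lost,
    # unlike resampling a fixed grid of bitmap pixels.
    return [(x * factor, y * factor) for x, y in path]

triangle = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
print(scale_path(triangle, 10))   # still a crisp triangle at any zoom
```

The GPU's job in path rendering is to rasterize these exact outlines fresh at each zoom level, which is what keeps text and SVG content sharp.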

NVIDIA believes many 2D graphics applications will soon move path-rendering work from the CPU to the more efficient video core, gaining flexibility and performance. At a special NVIDIA event, journalists were shown this technology on an ordinary Tegra K1 tablet, with the CPU and GPU versions launched in turn. Indeed, rendering, scaling and rotating HTML pages and SVG images ran many times faster on the GPU, visibly so to the naked eye.

Tegra K1 Graphics Processing Capabilities

So, the graphics capabilities of the Tegra K1 video core are fully up to date, since the mobile GPU core of the released chip is based on NVIDIA's latest graphics architecture. The Tegra K1 video core supports OpenGL ES 3.0 alongside OpenGL 4.4 and all DirectX 11 features, including tessellation, as well as NVIDIA CUDA 6.0 and OpenCL.

Most importantly, the new mobile GPU is much simpler and less power-hungry than its older siblings. The mobile video core supports everything the GeForce GTX Titan does while consuming a hundred times less energy! NVIDIA lists chip consumption at 2 W, though in some demanding tasks typical consumption will likely be around 2.5-3 W, which is normal for powerful mobile SoCs intended for tablets and large smartphones with 5-inch (or larger) screens.

We have already noted some milestones in the development of hardware 3D graphics: the RealityEngine in the 1990s, NVIDIA's release of the first gaming GPU with hardware-accelerated geometry processing in 1999, the introduction of programmable shaders in 2001, and the arrival of GPGPU computing in recent years. Each step opened new possibilities, and the appearance of such an advanced video core in mobile solutions is an important event for NVIDIA.

The capabilities of the new GPU will show themselves not only in games with complex 3D graphics, but also in image processing, automotive applications of the SoC, and other tasks we will discuss later. Naturally, though, the new 3D-graphics capabilities will make the biggest impression on users.

NVIDIA has prepared a new demo for the first mobile Kepler that uses Unreal Engine. Some elements in it are precomputed, of course, but the bulk of the lighting is calculated in real time, with HDR rendering and complex materials.

The result looks very impressive, even to the engine's creators at Epic Games, Mark Rein among them. Rendering complex scenes that previously demanded a fairly powerful desktop GPU now runs well on the new mobile chip, which bodes well for high-end smartphones and tablets:

Such powerful systems open new opportunities for mobile 3D developers, because graphics quality close to that of PC games like Watch_Dogs, Battlefield 3/4 and Assassin's Creed IV becomes attainable.

All this will be available in the near future on mobile (and not just mobile, but also pocket!) devices with Tegra K1. The powerful Kepler video core offers a large number of new features not previously available on similar devices:

Tegra K1 graphics provide flexible programmability and superior performance for mobile devices, making it easy to create high-quality 3D images and transfer gaming applications from "adult" platforms: PC and consoles. In the near future, even on mobile systems, such effects and algorithms as tessellation, realistic PhysX physical effects, complex lighting (including global) and post-processing, and even ray tracing will be available.

From the previous mobile chip to the Tegra K1 there has been a decent jump in 3D performance and a staggering jump in functionality. You can gauge it by comparing the graphics of the GLBenchmark Egypt benchmark with the face from NVIDIA's famous demo program.

That demo is FaceWorks "Ira", the simulation of a human face and facial expressions you already know. One of the most detailed human-face simulations, first shown a few months ago as a showcase for the best desktop solutions, now runs on mobile devices with the Tegra K1.

Of course, the mobile version of the demo required some simplifications: a video chip drawing 2-3 W cannot yet process 5 trillion floating-point operations per second. What matters is that all the basic effects remain: full HDR rendering, FXAA anti-aliasing, and subsurface scattering, which simulates light propagating through translucent human tissue. The mobile demo uses simplified shaders, fewer off-screen buffers, lower-resolution textures and a single pass. Picture quality is somewhat reduced but remains very high for compact devices, unattainable before.

Some readers will surely object that serious games demanding high-quality 3D graphics are hardly ever played on mobile devices, but data from App Annie and Flurry Analytics say otherwise: even in 2012, Google Play users spent 76% of all money spent on software purchases on games, with non-gaming software getting only 24%. What's more, tablet users spend 67% of their device time playing games, leaving only the remaining third for everything else.

Looking at developers' platform targets, 55% of all game developers build for mobile devices, even more than for PC (48%), to say nothing of the 5-13% of companies targeting the various living-room consoles. Much of this is due to the lower cost of developing for mobile, but the fact stands: most game developers are aimed at mobile devices.

DFC Intelligence data and forecasts likewise show the mobile-gaming market growing faster than every other gaming platform. So high-quality 3D graphics may well find demand in mobile games, even among players still content with 2D, just as once happened on the "adult" gaming platforms.

But how can one develop high-quality games for mobile devices when PC and console developers enjoy a wealth of specialized software, while for smartphones and tablets all of that is in its infancy? This is largely why Android games are less impressive than console ones. To help developers with this difficult task, NVIDIA has long been building a variety of supporting software for creating 3D games.

These utilities are now available on mobile too, the same programs as on the PC. For example, the famous GameWorks package works with the Tegra K1's Kepler video core and includes the SDKs, algorithms, effects, engines and libraries known from the PC, such as the VisualFX SDK, Core SDK, Graphics Lib, Game Compute Lib, OptiX and PhysX. NVIDIA also offers tools for debugging mobile applications in Visual Studio, along with NVIDIA Nsight Tegra, a specialized debugger for mobile graphics. All of this is already quite functional and in use by developers:

More than three hundred NVIDIA employees work on the GameWorks package mentioned above, which comprises the VisualFX SDK, a set of complex realistic effects; Graphics Lib, sample effects with documentation and tutorials; the PhysX SDK, the most widely used physics engine, featured in more than 500 games; Game Compute Lib, compute-shader samples in CUDA, DirectX and GLSL; the OptiX SDK, a ray-tracing engine; and more.

NVIDIA's developer utilities include everything needed to build Android gaming apps in the familiar Visual Studio environment, with the specialized NVIDIA Nsight Tegra debugger. Full tool support across desktop GeForce solutions and the Tegra K1 mobile chip means PC and mobile programming are treated the same way, and code is relatively easy to port between the platforms.

Not surprisingly, NVIDIA's new chip and developer tools have already won enthusiastic support from developers, and the company is collaborating with all the major game-engine makers: CryEngine, Unreal Engine, id Tech 5, Frostbite, Unity, Source and others. They share source code and port their render engines to the new mobile platform, while NVIDIA helps with performance optimization and the adoption of new technologies, including PhysX physical effects.

Such support matters, because Unreal Engine alone powers more than 300 games (a few recent examples: BioShock Infinite, Hawken, Borderlands 2, Mass Effect 3, Gears of War 3, Batman: Arkham City) and has been the most commercially successful game engine of recent years. And the fourth version, Unreal Engine 4, is even more capable, including in the quality of its graphical effects.

That engine targets the current-generation consoles (PlayStation 4 and Xbox One) as well as modern PCs; it uses deferred rendering, tessellation, compute and geometry shaders, bindless texturing, ASTC texture compression and physical effects, and its system requirements mandate DirectX 11 or OpenGL 4.4 support.

Therefore, games based on Unreal Engine 4 simply will not run on mobile chips without OpenGL 4 support. But the Kepler graphics core in the Tegra K1 far exceeds the OpenGL ES 3.0 specification. Perhaps for the first time, a mobile chip offers everything (absolutely everything in terms of capabilities, allowing only for lower performance) that we have already seen on desktop PCs, so gaming applications based on Unreal Engine 4 can be built for mobile devices on the new NVIDIA chip.

At the event, journalists were shown an Unreal Engine 4 shooter demo that looks quite impressive on a mobile device and features complex graphical and physical effects. But all of that is a matter of the near, though still future. Porting existing PC games to Android, however, is likely in the very near term, since NVIDIA Tegra K1 fully supports OpenGL 4.4 and some games can be ported very easily.

For example, Serious Sam 3: BFE, released on PC in 2011, has already been ported to Tegra K1 without any graphical simplifications, and the entire porting process took only a few days! This was possible because the game engine was designed from the start for OpenGL 4 as well as DirectX, and the Kepler video core in Tegra K1 provides that support. But even if a game's renderer was originally written for Direct3D, porting it to the mobile Tegra K1 chip is not especially difficult, since NVIDIA has provided developers with all the necessary tools.

Game developers who were among the first to receive Tegra K1 and have already started programming for the new mobile GPU are very impressed with its capabilities. One such developer is 11 bit studios, known for the tower-defense game Anomaly 2, which was released on PC back in May 2013 and ported to Android in the fall. The current mobile version uses OpenGL ES and is somewhat simplified compared to the desktop version, but the Polish company has already built a Tegra K1 version that uses OpenGL 4 and has no simplifications at all!

Indeed, the demo shown to journalists on a Tegra K1 tablet has very good graphics for a portable device. Moreover, at maximum settings the Tegra K1-based device delivers 60 FPS with particle systems, PhysX effects (stones, smoke and so on), HDR rendering and post-processing effects. Google Play hosts a separate benchmark (Anomaly 2 Benchmark), which is also promised to be optimized for Tegra K1.

The Polish developers at 11 bit studios say they spent very little time porting the game to Tegra K1 compared to the regular Android version: a few days versus several months. Whereas the ordinary OpenGL ES version required removing or rewriting some effects and algorithms and simplifying assets (preparing low-resolution textures and less complex models), the full-fledged Kepler in Tegra K1 makes such simplification unnecessary, which greatly eases porting.

Porting "desktop" games to portable devices has never been easier, and more examples can be expected in the future. Another PC game with great graphics that has been ported to Android and looks great on the NVIDIA Tegra K1 is the physics-puzzle platformer Trine 2. The mobile version of this game, shown on a tablet with the new NVIDIA chip, looks no worse than on the older gaming platforms!

It should be noted that the importance of the OpenGL graphics API has been growing lately: it is used by all Android mobile devices, by Linux and Apple PCs, and by some desktop game consoles. And with the release of Valve's SteamOS, a Linux-based operating system designed for Steam games, OpenGL's position may be strengthened even further. This only plays into the hands of NVIDIA, which has always paid special attention to supporting this API.

⇡ Versatile computing

But today's graphics processors, even mobile ones, are expected to do more than just graphics. NVIDIA's competitors have long flaunted OpenCL in their spec sheets (though the practical benefit of this is still not evident). NVIDIA is in a better position here, since it has the most GPGPU experience of anyone: the company has been producing suitable chips for eight years now!

Yes, a lot of time has passed since 2006, when the G80 became the first video chip aimed at GPGPU computing. Many architectures and generations have come and gone since then, and the video core in the Tegra K1 mobile chip is now based on exactly the same Kepler architecture as its desktop counterparts. Moreover, this is an absolutely full-fledged Kepler for general-purpose computing: its instruction set is 100% compatible with the GTX Titan and other video cards, and it has the same caching capabilities, including 64 KB of memory configurable between shared memory and the L1 cache, and so on.

And in double-precision (FP64) throughput, NVIDIA's mobile video core is in no way inferior to the budget Kepler solutions (known by the code names GK10x): such calculations run at 1/24 the rate of single-precision ones. NVIDIA has invested a lot of time and money in general-purpose computing, and it offers the industry's best GPGPU support: programming languages, libraries and other developer tools:
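The 1/24 ratio translates directly into peak numbers. A back-of-the-envelope sketch, where the ~950 MHz GPU clock is an assumption for illustration (shipping clocks vary by device):

```python
# Peak throughput estimate for the Tegra K1 GPU (single Kepler SMX).
# Assumption for illustration only: ~950 MHz GPU clock.

CUDA_CORES = 192        # per the article: 192-core Kepler graphics core
FLOPS_PER_CORE = 2      # one fused multiply-add = 2 FLOPs per core per clock
GPU_CLOCK_HZ = 950e6    # assumed clock

fp32_peak = CUDA_CORES * FLOPS_PER_CORE * GPU_CLOCK_HZ  # single precision
fp64_peak = fp32_peak / 24                              # FP64 at 1/24 rate

print(f"FP32 peak: {fp32_peak / 1e9:.0f} GFLOPS")
print(f"FP64 peak: {fp64_peak / 1e9:.1f} GFLOPS")
```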

With the release of Tegra K1, the company's CUDA parallel computing platform now runs on all of its solutions, from the mobile Tegra to professional Tesla accelerators. The CUDA 6 developer tools support all of the company's modern graphics solutions, along with other software: NVIDIA Nsight Eclipse Edition, Visual Profiler, cuda-gdb, cuda-memcheck and much more.

Desktop GPUs have long been able to do much more than just render 3D scenes, and GPGPU computing has finally come to NVIDIA's mobile solutions. Yes, here the company is slightly behind its competitors, who announced general-purpose computing on their GPUs long ago, but little has come of those announcements: we cannot recall a single notable case of GPGPU being used in a mobile device. NVIDIA's experience and relative success in other markets, however, give hope for wider adoption of such features.

With the arrival of the Kepler-based video core in Tegra K1, computationally demanding capabilities become possible: face, speech and image recognition, computer vision, advanced real-time image processing, more sophisticated augmented-reality systems, changing the depth of field and refocusing in still images, and many other things we cannot yet even imagine that could bring about a real revolution in mobile devices.

One of the most relevant applications for the greatly increased and qualitatively improved computing capabilities of NVIDIA Tegra K1 is computational photography, which we already mentioned in the Tegra 4 review. The fifth edition of the company's mobile chip supports the second version of the Chimera computational photography engine, Chimera 2, which brings a number of changes and improvements.

The engine itself has improved mostly quantitatively; the qualitative leap is in the image-processing capabilities of the GPU computing cores, which have become more numerous and easier to program. After all, Kepler is a flexibly configurable, highly efficient architecture for the parallel processing of huge data arrays. And from the standpoint of computational photography, what is needed is a fast link between the GPU cores and the Chimera 2 engine, that is, tight integration of these blocks within Tegra K1.

The new Tegra mobile chip includes a dual next-generation image signal processor (ISP), improved over the one in Tegra 4. Each of the two ISPs can process up to 600 megapixels per second, for a combined throughput of 1.2 gigapixels/s. The ISP blocks support camera modules with resolutions up to 100 megapixels and color depths up to 14 bits per pixel.

Why would such power be needed at all? Beyond typical tasks like high-quality noise reduction, scaling and color correction, obvious candidates include local tone mapping (converting a wide brightness range into a narrower one) in real time at 30 frames per second, shooting high-resolution panoramas in real time, video stabilization, and support for a huge number of focus points: up to 4096 on a 64-by-64 grid, which is useful for tracking objects moving through the frame in any direction, horizontally, vertically or diagonally. And all this with very high performance and relatively low power consumption, since the Chimera 2 engine is optimized for such calculations.
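The tone-mapping step mentioned above is, at its core, per-pixel brightness compression. Here is a minimal sketch using the classic global Reinhard curve; the real Chimera 2 pipeline applies far more sophisticated, locally adaptive operators on the GPU, so this only shows the brightness-range compression itself:

```python
# Global tone mapping with the Reinhard operator: L_out = L / (1 + L).
# Compresses an unbounded HDR luminance range into [0, 1) for display.
# ("Local" tone mapping varies the curve per image region; this global
# version illustrates only the range compression.)

def reinhard(luminance):
    return luminance / (1.0 + luminance)

hdr_scanline = [0.02, 0.5, 1.0, 4.0, 16.0, 400.0]  # HDR luminance samples
ldr_scanline = [round(reinhard(l), 3) for l in hdr_scanline]
print(ldr_scanline)  # dark pixels barely change, highlights compress strongly
```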

But performance is not the only thing users care about when shooting photos and video; image quality raises even more questions. In addition to the algorithms already listed for high-quality noise reduction, tone mapping, scaling and color correction, NVIDIA claims its solution has an image-quality advantage for data captured from the sensor, expressed as a better signal-to-noise ratio, that is, a lower noise level:

Although it is very difficult to judge quality, or any positive difference at all, from the images provided on the slide, NVIDIA assures us that the pictures were taken in a specially equipped studio with a calibrated light source under low lighting (18 lux), and that under such conditions the new ISP has an advantage over the one used in Tegra 4.

Journalists were shown a demo program processing video from the camera: local tone mapping applied in real time to an image captured at 30 frames per second. Such resource-intensive operations require the GPGPU capabilities of the powerful Kepler-architecture video core. And although the demo was not the most impressive, it clearly relies on complex calculations; the rest is up to the developers, to whom Tegra K1 offers impressive possibilities.

Incidentally, the Chimera 2 computing architecture in Tegra K1 provides a pipeline for using CPU and GPU resources simultaneously. Tasks that run better on the GPU can be handed to the video chip, while calculations with many branches and conditions can be performed on the CPU, and these workloads can be mixed effectively toward a single goal.
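The CPU/GPU split described here can be sketched as a simple dispatcher. The stage names and the "branchiness" heuristic are entirely hypothetical, for illustration only; the real Chimera 2 pipeline lives at the driver level, not in application code:

```python
# Toy model of a heterogeneous pipeline: data-parallel stages go to the
# "GPU" queue, branch-heavy stages to the "CPU" queue.

def dispatch(stages, branchy_threshold=0.5):
    queues = {"cpu": [], "gpu": []}
    for name, branchiness in stages:
        target = "cpu" if branchiness > branchy_threshold else "gpu"
        queues[target].append(name)
    return queues

pipeline = [
    ("demosaic",             0.1),  # uniform per-pixel work: GPU
    ("noise_reduction",      0.2),  # data-parallel filtering: GPU
    ("scene_classification", 0.8),  # many branches/conditions: CPU
    ("local_tone_mapping",   0.3),  # per-tile math: GPU
    ("autofocus_decision",   0.9),  # control logic: CPU
]
print(dispatch(pipeline))
```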

Chimera 2's capabilities matter not only for photo and video capture but also for computer (machine) vision, which relies on very complex algorithms for object recognition and other tasks. A particular example is automotive computer vision: recognizing pedestrians, lane dividers and other road markings, recognizing road signs and other objects, monitoring blind spots in rear-view mirrors, and similar tasks.

It must be said that the use of Tegra in cars has already become quite a success for NVIDIA: many automakers use the Californian company's mobile chips in their models, and the total number of cars sold with embedded Tegra chips has reached several million.

True, things have not yet reached the advanced uses described above. For now, systems-on-chip in cars serve navigation and entertainment systems, display information on the windshield, and even power fully digital instrument panels, where all the dials and numbers are rendered in real time, replacing the familiar "warm analog" gauges.

Judging by the demos NVIDIA showed, a digital instrument panel may well attract enthusiasts who like to customize everything in their cars, and it gives automakers a way to save on materials: instead of expensive metalwork, a digital model can simply be drawn on an LCD screen. For those to whom this seems frivolous, there are capabilities like detecting lane markings and road signs; the demonstrations shown worked well, although these capabilities can hardly be called completely new.

⇡ Performance and power consumption

As usual, we will try to present all the information currently available on the performance and power consumption of the new Tegra K1 chip. For obvious reasons, only official data from NVIDIA itself is available so far, that is, from a party interested in the new product looking good, so it should be treated with a healthy dose of skepticism.

As for the CPU part of Tegra K1, everything is already clear: the same Cortex-A15 cores, whose speed characteristics have long been known. The only change compared to the previous NVIDIA SoC is that the clock speed of the CPU cores in Tegra K1 can reach 2.3 GHz, and on all cores simultaneously: the chip has no turbo mode in which a single core runs at a higher frequency (the companion core, apparently, remains optimized primarily for low power consumption). Let's look at the first data on the energy efficiency of the improved third-revision Cortex-A15 cores.

In this diagram, NVIDIA compares two of its solutions from different generations: Tegra 4 and Tegra K1. On the CPU side they differ in the revision of the processor cores, R2 and R3 respectively. The difference in power consumption is further helped by a different manufacturing process, 28 nm HPM, optimized for mobile chips, along with other modifications.

The new CPU core revision and the different process have significantly improved the energy efficiency of the new product compared to the previous Tegra. At the same core power consumption, the fifth Tegra delivers 40% higher performance in the SPECint2k benchmark, and performance equal to that of the previous chip is achieved at less than half the power.

That is a very good result, but it was only a comparison with NVIDIA's own previous system-on-a-chip. What if we compare energy efficiency with competitors? After all, many of them have moved far ahead: Qualcomm has released several new solutions, not to mention Apple with its unique 64-bit Cyclone cores.

At least NVIDIA's own comparison in the Octane benchmark shows Tegra K1 becoming the most energy-efficient mobile chip on the CPU side. Across different frequencies and voltages, the new NVIDIA chip is clearly more energy efficient than the recent top-end Qualcomm Snapdragon 800, and by this most important parameter Tegra K1 turned out to be roughly on a par with the 64-bit Apple A7 running at 1.3 GHz. NVIDIA's solution is even slightly better, but the difference is negligible.

Such high CPU energy efficiency could clearly help the company regain the ground it has lost since the days of Tegra 3, which was widely used in smartphones and tablets. But so far we have talked only about the CPU cores; what about the performance and energy efficiency of the GPU core, which interests us even more? Let's look first at a theoretical comparison of Tegra 4 and Tegra K1:

Actually, there is nothing unexpected in the data: in peak numbers, the new GPU significantly outperforms the outdated Tegra 4 video core. While the twofold per-clock difference in texturing is not that dramatic, and the increase in overall arithmetic power falls short of threefold, the difference in rasterization speed and geometry performance is far more impressive. We should also remember that the Kepler computing cores are much more advanced and flexible, and a direct comparison of cache sizes is not entirely fair, since the L2 cache in Tegra K1 is flexibly configurable and useful across a wide range of tasks.

Overall, on paper there is simply a huge difference between Tegra 4 and Tegra K1! The improvement in energy efficiency is also expected to be very impressive, but more on that later. Now let's see how close the Tegra K1 GPU gets to the capabilities of... the previous generation of desktop consoles, the very ones many gamers still play today.

Of course, the comparison is neither simple nor entirely fair, since the console architectures differ greatly from both the PC and the mobile design of Tegra K1. For example, the Xbox 360's bandwidth to its 10 MB of dedicated memory is much higher, at 256 GB/s. Otherwise, the Tegra K1 GPU is roughly on the level of the previous-generation consoles: in almost all theoretical peak parameters, the new NVIDIA mobile chip is no worse than the PlayStation 3 and Xbox 360, except for memory bandwidth (even setting aside the fast 10 MB of memory in the Microsoft console) and texturing speed.

Even the comparison of arithmetic performance is not so clear-cut: in the Sony console's case it does not take into account the powerful auxiliary CPU cores onto which part of the GPU's work can be offloaded, although such low-level programming was available only to select developers. With Tegra K1, programmers get the Kepler architecture already familiar from the PC, whose capabilities and quirks have been thoroughly studied. In short, there are pluses and minuses on both sides, and judging by the theoretical figures, Tegra K1 may well compete with the GPU and CPU in the PlayStation 3 and Xbox 360, and in some respects, such as the amount of available memory and GPU features, it surpasses them outright.

All this makes Tegra K1 a very powerful gaming platform, one of the best among mobile solutions. Of course, keep in mind that this comparison is purely theoretical: consoles have their own advantages, including a single hardware configuration that makes it easy to optimize game code to the fullest. In addition, console games are developed by the best companies in the business, with the experience, knowledge and skills to squeeze everything possible out of the hardware. So don't expect console-quality games on mobile, at least not anytime soon. But NVIDIA has its own advantages over other mobile platforms: excellent software support for game developers and the TegraZone gaming platform, so it has a chance of success.

As for specific numbers, at the NVIDIA event journalists were shown some graphics benchmark figures along with live demonstrations of the new Tegra K1's performance and energy efficiency. A slide with GFXBench 2.7.5 numbers shows a clear advantage for the Tegra K1 GPU core over the Adreno 330 in the Snapdragon 800 and the PowerVR G6430 in the Apple A7, tested in similar 7-to-9-inch tablet form factors.

Even the dated GFXBench 2.7.5, which uses old algorithms, effects and even APIs and does not exercise Tegra K1's new features, shows that the new NVIDIA chip's advantage over strong rivals is more than twofold! In more modern 3D tests, the advantage of NVIDIA's modern graphics core should grow even larger.

But maybe the Tegra K1 video core consumes too much power? Kepler itself is known as a very energy-efficient architecture, and we have already discussed data caching and buffering, aggressive data compression at many points in the graphics pipeline, Z-buffering optimizations and so on. Even the desktop Kepler chips have many features specifically aimed at energy efficiency.

But for mobile chips that is not enough. The mobile Kepler uses multi-level clock-frequency and voltage scaling; there are two levels of power gating for GPU functional units not currently in use; the on-chip interconnects are further optimized (we already mentioned the absence of links between different SMX multiprocessors, since the GK20A has only one); and special operating modes were introduced for idle, low and high execution-unit load.
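The payoff of frequency and voltage scaling follows from the standard CMOS approximation that dynamic power is roughly proportional to frequency times voltage squared. A sketch with illustrative operating points (not actual Tegra K1 voltage tables):

```python
# Dynamic CMOS power scales roughly as P ~ C * f * V^2.
# Lowering frequency usually permits lowering voltage too, so power
# falls much faster than performance: the point of multi-level DVFS.

def relative_power(freq_ratio, volt_ratio):
    return freq_ratio * volt_ratio ** 2

# Illustrative operating points (frequency ratio, voltage ratio vs. max):
for f, v in [(1.0, 1.0), (0.75, 0.9), (0.5, 0.8)]:
    print(f"{f:.0%} clock at {v:.0%} voltage -> {relative_power(f, v):.0%} power")
```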

These additional measures have significantly improved energy efficiency even compared to the fairly economical "laptop" versions of the Kepler chips: consumption has more than halved, from five watts down to two. It is likely that similar techniques will be used in future solutions based on the Maxwell architecture, and the energy-efficiency gains achieved in mobile Kepler will benefit future desktop GPUs as well.

At the NVIDIA event, journalists were shown a special test bench for measuring the power consumption of mobile devices, to which various tablets and smartphones were connected; by these measurements, the Tegra K1 tablet also proved significantly more energy efficient than all competitors, including the latest models from well-known manufacturers. Let's compare the energy efficiency of Tegra K1 with the Snapdragon 800 and Apple A7 in the brand-new GFXBench 3.0 3D performance test:

So, if all the systems-on-chip are held to a 2.5 W power budget (typical consumption of high-end smartphones under heavy load) and Tegra K1's performance is throttled to match its competitors in this test, it turns out that at equal performance Tegra K1 consumes noticeably less energy than the PowerVR G6430 in the Apple A7 (in the iPhone 5S) and the Adreno 330 in the Qualcomm Snapdragon 800 (in the Sony Xperia Z Ultra).

The tests, recall, were performed on NVIDIA's own specialized bench. According to NVIDIA, the mobile Kepler in the new chip is 1.5 times more energy efficient than Apple's latest system-on-a-chip at the same speed, and 1.5 times more efficient than the Snapdragon 800 at reduced consumption. In other words, going by the company's data, the main current competitors have been beaten: an excellent result! It seems NVIDIA has finally put a worthy graphics core into its mobile chip, surpassing its competitors in every respect, both in functionality and in performance.
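NVIDIA's "same performance at lower power" claim reduces to a performance-per-watt ratio. A sketch with entirely hypothetical numbers, chosen only to reproduce the ~1.5x ratio the slides cite (the slides themselves give ratios, not absolute figures):

```python
# Comparing energy efficiency as benchmark-score-per-watt.
# The (score, watts) pairs are hypothetical placeholders, not measured data.

def perf_per_watt(score, watts):
    return score / watts

chips = {
    "Tegra K1":       (30.0, 2.0),   # GFXBench 3.0 fps, watts (hypothetical)
    "Apple A7":       (30.0, 3.0),
    "Snapdragon 800": (26.0, 2.6),
}

baseline = perf_per_watt(*chips["Apple A7"])
for name, (score, watts) in chips.items():
    eff = perf_per_watt(score, watts)
    print(f"{name}: {eff:.1f} fps/W ({eff / baseline:.2f}x vs Apple A7)")
```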

⇡ Conclusions

The market for systems-on-a-chip designed for mobile devices is developing very quickly, and mobile chips become more functional and faster with each generation. And this against the backdrop of relative stagnation in the desktop market, which, if not shrinking, has at least plateaued in sales. Mobile SoC makers release ever more powerful and advanced solutions while trying to keep power consumption within acceptable limits: high energy efficiency has always been the most important parameter in mobile products.

Mobile chips that do not deliver high efficiency, or that lag technologically by the time they ship, simply do not find wide adoption. NVIDIA Tegra 4, for example, cannot be called the most successful attempt. Its CPU performance is very good and its GPU core is quite fast, but in capabilities that GPU is not far removed from the company's very first mobile solution. Yes, some modifications were made, but the foundation went unchanged for several years, which was simply unbecoming of a recognized leader of the GPU market. In the mobile sector, too, NVIDIA is always expected to offer the best solution in 3D capabilities, at the very least one not inferior to competitors.

Moreover, Tegra 4 did not really stand out from many similar solutions in its other characteristics either: it may have been announced later, but it became available at roughly the same time. In other words, the previous NVIDIA chip was clearly late to market. We sincerely hope that this time absolutely all the shortcomings of the previous solution have been eliminated. At least from a technical standpoint, the new chip is much better, especially in its GPU part.

As for the CPU, the four Cortex-A15 cores (plus the fifth companion core) of the more advanced R3 revision, additional optimizations and the move to the 28 nm HPM process made it possible both to raise CPU performance compared to Tegra 4 (the maximum core frequency rose to 2.3 GHz) and to significantly improve the new solution's energy efficiency. NVIDIA's preliminary tests point to high performance and energy efficiency of the new chip's cores, although for final conclusions it is better to wait for independent comparisons with the best competitors in our test lab.

Perhaps NVIDIA is slightly behind the other industry leaders in adopting 64-bit ARM cores, which it would need in order to lead in absolutely everything, but as long as the supported 4 GB of memory is enough, this "lag" is not too relevant. Besides, full use of the 64-bit ARMv8 instruction set requires software support from Google and other software vendors. So far only Apple has it, since Apple solely owns its entire infrastructure: not only the hardware but also the software (the operating system and the rest of the stack).

The main thing is that NVIDIA has finally retired the outdated graphics core and managed to integrate its most advanced architecture, Kepler, into a mobile chip. This alone makes Tegra K1 one of the likely leaders of 2014 in the market of systems-on-a-chip for top-end devices. Not only does the new mobile GPU offer all the features of desktop solutions, exceeding the functionality of competitors in many respects, but NVIDIA also has the industry's best developer tools, an excellent program of cooperation with game creators and its own TegraZone gaming platform.

Judging by the functionality and performance of the Tegra K1 graphics core in NVIDIA's preliminary tests, it is this chip that can enable the porting and development of games with quality approaching console projects for the previous generation of game consoles, the PlayStation 3 and Xbox 360. And the four Cortex-A15 cores and the powerful Kepler graphics core will cope just fine with many PC games of recent years.

The new chip will allow mobile games to use complex geometry with tessellation, advanced PhysX physics effects, complex shaders and texturing, post-processing via compute shaders and much more: everything we are used to on gaming PCs. Serious games of the recent past can be moved to Tegra K1 in days or weeks, as the examples of Serious Sam 3 and Anomaly 2 show. And given TegraZone, as well as the high likelihood of subsequent versions of the NVIDIA Shield handheld console, these games may become one of the reasons motivating users to buy powerful mobile devices.

This time NVIDIA has produced a mobile chip that is far more interesting both technologically and commercially than the previous generation. The new Tegra features slightly more powerful CPU cores and a much more powerful and capable GPU core, many times better than the one in Tegra 4. And with all these improvements, energy efficiency has even improved: the new chip consumes no more power than Tegra 4 while the power of its GPU has grown significantly. It seems NVIDIA has finally built a successful system-on-a-chip, a sort of "Tegra 4 done right"; it seems to us that last year's chip should have been exactly like this, and then it would have won a much larger market share.

Everything seems very good, but a couple of questions remain. Have you noticed that the article says not a word about the modem and other wireless interfaces? That is because nothing has changed there: Tegra K1 has no built-in modem, and no new external Icera chips were presented. In fact, none are strictly needed, since the latest Icera chips support LTE, and Tegra K1 is intended not only for smartphones that require cellular connectivity but also for other devices, such as tablets and game consoles. What is interesting is that NVIDIA cautiously hints that finished smartphones and tablets based on Tegra K1 do not have to use Icera chips. Either the company has given up on promoting Icera soft modems and is quietly winding that business down, or it simply wants to give device makers more flexibility.

The need for an additional chip to support mobile data with Tegra K1 can in some sense be considered a drawback, since Qualcomm, for one, has long offered competitive systems-on-a-chip with built-in LTE modems. A single-chip solution with integrated LTE is simply more convenient for manufacturers, which also affects the cost of devices. But this is not too big a drawback, and it may well be outweighed by the most powerful GPU core and thoroughly modern CPU performance, which matter far more for top-end smartphones and tablets.

Apparently, the capabilities and performance of the Tegra K1 chip are fine; the most important and sore point remains the availability of finished mobile devices on the market. Indeed, with almost all previous solutions NVIDIA was clearly late to market, which hurt Tegra 4 and Tegra 4i especially badly. We have not yet learned to predict the future, but according to NVIDIA, mobile devices using its newest chip are expected in the first quarter of 2014, which has just begun, and to ensure timely software support the company provided many software developers with dev kits months ago.

And if tablets, smartphones and other devices (who said Shield 2?) based on Tegra K1 really do come out in the first half of the year, as NVIDIA says, the new product is unlikely to repeat the "success" of its predecessor, which saw little use in third-party designs. It is also possible that Tegra K1 will not replace the fourth chip but will instead sit at the top of the Tegra line: after its release, several Tegra chips may be produced simultaneously for mobile devices of various classes and purposes. That is, Tegra 4 and Tegra 4i could continue to be used in simpler and more compact devices. Or perhaps several chips with different characteristics will ship under the Tegra K1 brand: who knows?

In the first half of the year, NVIDIA expects more than just tablets and smartphones. Perhaps there will be third-party game consoles, both portable and stationary? It is too early to talk about other companies' Tegra K1 devices, but this system-on-a-chip will definitely become the basis of the next version of NVIDIA's own Shield handheld console. We assume that in addition to moving to the Tegra K1 chip, the second Shield may also get a larger display: the device is rather big and the first version has wide bezels around the screen. Shrinking them to accommodate a 5.5-to-6-inch Full HD screen would be a well-founded upgrade.

NVIDIA also plans to continue its practice of releasing Tegra Note reference tablets. You probably know that these 7-inch Tegra 4 tablets are already sold under various brand names in different regions of the world: EVGA, PNY, Oysters, ZOTAC, Colorful, XOLO and others. The advantages of these devices are powerful internals, timely firmware updates and a relatively low price.

The current Tegra Note 7 chassis has become the basis for an improved Tegra K1 reference tablet; the new version looks the same externally but has a screen upgraded to 1920×1200 resolution and 4 GB of RAM. At a press event at NVIDIA's office we confirmed that these improved Tegra Note versions exist and work (in the enlarged image you can see the labels of all the software mentioned in the article):

Together with our readers, we hope that this time NVIDIA handles mass production and market launch far better than with previous Tegra models, which were often late. It is not yet entirely clear whether the mass arrival of Tegra K1 devices will depend on TSMC's production capacity or on NVIDIA itself. But it at least appears that NVIDIA is doing everything in its power to eliminate the delays it can influence: after all, the release of the updated Shield and Note depends almost exclusively on TSMC and NVIDIA.

P.S. After this article was finished, NVIDIA officially confirmed that two pin-compatible versions of Tegra K1 will be released: one with 32-bit 4+1 Cortex-A15 cores, and one with two 64-bit cores of NVIDIA's own design, based on the ARMv8 architecture and known by the code name Denver. Other highlights of the new cores include an even higher clock speed of up to 2.5 GHz and larger caches. In this form, Tegra will become an even stronger player in the market, but again only if the chip does not arrive too late. So far there are only estimated release dates, given as the second half of this year, but NVIDIA already has the first working chips, and they were produced quite recently.
