
Tech News - Game Development

93 Articles

Godot 3.1 released with improved C# support, OpenGL ES 2.0 renderer and much more!

Savia Lobo
15 Mar 2019
4 min read
On Wednesday, 13 March, the Godot developers announced the release of a new version of the open source, cross-platform 2D and 3D game engine, Godot 3.1. This new version includes much-requested improvements over the previous major release, Godot 3.0.

Improved features in the Godot 3.1 OpenGL ES 2.0 renderer

Rendering is done entirely in sRGB color space (the GLES3 renderer uses linear color space). This is much more efficient and compatible, but it means that HDR will not be supported.

- Some advanced PBR features such as subsurface scattering are not supported. Unsupported features will not be visible when editing materials.
- Some shader features will not work and will throw an error when used.
- Some post-processing effects are not present either. Unsupported features will not be visible when editing environments.
- GPU-based particles will not work, as there is no transform feedback support. Users can use the new CPUParticles node instead.

Optional typing in GDScript

This has been one of the most requested Godot features from day one. GDScript lets you write code quickly within a controlled environment. The code editor will now show which lines are safe with a slight highlight of the line number. This will be vital in the future for optimizing small pieces of code which may require more performance.

Revamped Inspector

The Godot inspector has been rewritten from scratch. It includes features such as proper vector field editing, sub-inspectors for resource editing, better custom visual editors for many types of objects, comfortable spin-slider controls, better array and dictionary editing, and many more.

KinematicBody2D (and 3D) improvements

Kinematic bodies are among Godot's most useful nodes. They allow creating very game-like character motion with little effort. For Godot 3.1 they have been considerably improved with:

- Support for snapping the body to the floor.
- Support for RayCast shapes in kinematic bodies.
- Support for synchronizing kinematic movement to physics, avoiding a one-frame delay.

New axis handling system

Godot 3.1 uses the novel concept of "action strength". This approach allows using actions for all use cases and makes it very easy to create in-game customizable mappings and customization screens.

Visual Shader Editor

This was a pending feature to re-implement in Godot 3.0, but it couldn't be done in time back then. The new version has features such as PBR outputs, port previews, and easier-to-use mapping to inputs.

2D meshes

Godot now supports 2D meshes, which can be used from code or converted from sprites to avoid drawing large transparent areas.

2D skeletons

It is now possible to create 2D skeletons with the new Skeleton2D and Bone2D nodes. Additionally, Polygon2D vertices can be assigned bones and weight painted. Adding internal vertices for better deformation is also supported.

Constructive Solid Geometry (CSG)

CSG tools have been added for fast level prototyping, allowing generic primitives and custom meshes to be combined via boolean operations to generate more complex shapes. They can also become colliders to test together with physics.

CPU-based particle system

Godot 3.0 integrated a GPU-based particle system, which allows emitting millions of particles at little performance cost. The developers have added alternative CPUParticles and CPUParticles2D nodes that perform particle processing on the CPU (and draw using the MultiMesh API). These nodes open the window for adding features such as physics interaction, sub-emitters, or manual emission, which are not possible on the GPU.

More VCS-friendly

The new 3.1 version includes some much-requested enhancements:

- Folded properties are no longer saved in scenes. This avoids unnecessary history pollution.
- Non-modified properties are no longer saved. This reduces text files considerably and makes history even more readable.

Improved C# support

In Godot 3.1, C# projects can be exported to Linux, macOS, and Windows. Support for Android, iOS, and HTML5 will come soon. To learn about other improvements in detail, visit the changelog or the official website.

Microsoft announces Game stack with Xbox Live integration to Android and iOS
OpenAI introduces Neural MMO, a multiagent game environment for reinforcement learning agents
Google teases a game streaming service set for Game Developers Conference


Google Research Football Environment: A Reinforcement Learning environment for AI agents to master football

Amrata Joshi
10 Jun 2019
4 min read
Last week, Google researchers announced the release of the Google Research Football Environment, a reinforcement learning environment where agents can master football. The environment comes with a physics-based 3D football simulation in which agents control either one or all football players on their team, learn how to pass between them, and manage to overcome their opponent's defense to score goals. The Football Environment offers a game engine, a set of research problems called the Football Benchmarks, the Football Academy, and much more. The researchers have released a beta version of the open-source code on GitHub to facilitate research. Let's have a brief look at each of the elements in the Google Research Football Environment.

Football Engine: the core of the Football Environment

Based on a modified version of Gameplay Football, the Football Engine simulates a football match including fouls, goals, corner and penalty kicks, and offsides. The engine is programmed in C++, which allows it to run both with and without GPU-based rendering enabled. It supports learning from different state representations that contain semantic information, such as the players' locations, as well as learning from raw pixels. The engine can be run in both stochastic and deterministic modes for investigating the impact of randomness, and it is compatible with the OpenAI Gym API.

Read also: Create your first OpenAI Gym environment [Tutorial]

Football Benchmarks: learning from the actual field game

The researchers propose a set of benchmark problems for RL research based on the Football Engine, called the Football Benchmarks. These benchmarks highlight goals such as playing a "standard" game of football against a fixed rule-based opponent. The researchers provide three versions, the Football Easy Benchmark, the Football Medium Benchmark, and the Football Hard Benchmark, which differ only in the strength of the opponent. They also provide benchmark results for two state-of-the-art reinforcement learning algorithms, DQN and IMPALA, which can be run in multiple processes on a single machine or concurrently on many machines. (Image source: Google's blog post)

These results indicate that the Football Benchmarks are research problems that vary in difficulty. According to the researchers, the Football Easy Benchmark is suitable for research on single-machine algorithms, while the Football Hard Benchmark is challenging even for massively distributed RL algorithms.

Football Academy: learning from a set of difficult scenarios

The Football Academy is a diverse set of scenarios of varying difficulty that allows researchers to explore new research ideas and test high-level concepts. It also provides a foundation for investigating curriculum learning research ideas, where agents learn progressively harder scenarios. The official blog post states, "Examples of the Football Academy scenarios include settings where agents have to learn how to score against the empty goal, where they have to learn how to quickly pass between players, and where they have to learn how to execute a counter-attack. Using a simple API, researchers can further define their own scenarios and train agents to solve them."

Users are giving mixed reactions to this news, as some find nothing new in the Google Research Football Environment. A user commented on Hacker News, "I guess I don't get it... What does this game have that SC2/Dota doesn't? As far as I can tell, the main goal for reinforcement learning is to make it so that it doesn't take 10k learning sessions to learn what a human can learn in a single session, and to make self-training without guiding scenarios feasible." Another user commented, "This doesn't seem that impressive: much more complex games run at that frame rate? FIFA games from the 90s don't look much worse and certainly achieved those frame rates on much older hardware." A few others think they can learn a lot from this environment; another comment reads, "In other words, you can perform different kinds of experiments and learn different things by studying this environment."

Here's a short YouTube video demonstrating Google Research Football: https://youtu.be/F8DcgFDT9sc

To know more about this news, check out Google's blog post.

Google researchers propose building service robots with reinforcement learning to help people with mobility impairment
Researchers propose a reinforcement learning method that can hack Google reCAPTCHA v3
Researchers input rabbit-duck illusion to Google Cloud Vision API and conclude it shows orientation-bias
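Since the Football Engine exposes the OpenAI Gym API mentioned above, a minimal training loop looks like any other Gym environment. The sketch below is a rough illustration, assuming the beta open-source `gfootball` package and the `academy_empty_goal` scenario name as given in the project's documentation; argument names may differ in later releases.

```python
# Minimal sketch of driving the Football Environment through its Gym-style API.
# Assumes the beta `gfootball` package and the `academy_empty_goal` scenario.
import gfootball.env as football_env

env = football_env.create_environment(
    env_name="academy_empty_goal",   # one of the Football Academy scenarios
    representation="simple115",      # semantic state vector instead of raw pixels
    render=False,
)

obs = env.reset()
done = False
total_reward = 0.0
while not done:
    action = env.action_space.sample()   # replace with a trained agent's policy
    obs, reward, done, info = env.step(action)
    total_reward += reward
print("episode reward:", total_reward)
```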


What's new in Unreal Engine 4.19?

Sugandha Lahoti
16 Apr 2018
3 min read
The highly anticipated Unreal Engine 4.19 is now generally available. This release hosts a new Live Link plugin, improvements to Sequencer, a new Dynamic Resolution feature, and multiple workflow and usability improvements. In addition to these major updates, the release also features a massive 128 improvements based on submissions from the Unreal Engine developer community on GitHub. Unreal Engine 4.19 allows game developers to know exactly what their finished game will look like at every step of the development process. The update comes with three major goals: let developers step inside the creative process, build gaming worlds that run faster than ever before, and give developers full control. Here's a list of the major features and what they bring to the game development process.

Live Link Plugin improvements

The Maya Live Link Plugin is now available and can be used to establish a connection between Maya and UE4 to preview changes in real time. Virtual Subjects have been added to Live Link, and it can also be used with Motion Controllers. Live Link Sources can now define their own custom settings, and a virtual Initialization function and an Update DeltaTime parameter have been added to the Live Link Retargeter API.

Unified Unreal AR framework

The Unreal Augmented Reality Framework provides a unified framework for building Augmented Reality (AR) apps for both Apple and Google handheld platforms using a single code path. Features include functions supporting Alignment, Light Estimation, Pinning, Session State, Trace Results, and Tracking.

Temporal upsampling

The new upscaling method, Temporal Upsample, uses two separate screen percentages for upscaling: a primary screen percentage that by default uses the spatial upscale pass as before, and a secondary screen percentage that is a static, spatial-only upscale at the very end of post-processing, before the UI draws.

Dynamic resolution

Dynamic Resolution adjusts the resolution to achieve the desired frame rate for games on PlayStation 4 and Xbox One. It uses a heuristic to set the primary screen percentage based on the previous frame's GPU workload.

Physical light units

All light units are now defined using physically based units. The new light unit property can be edited per light, changing how the engine interprets the intensity property when doing lighting-related computations.

Landscape rendering optimization

The Landscape level of detail (LOD) system now uses screen size to determine the detail for a component, similar to how the Static Mesh LOD system works.

Starting from this release, all existing UE4 content that supports SteamVR is also compatible with HTC's newly announced Vive Pro. These are just a select few updates to Unreal Engine; the full list of release notes is available on the Unreal Engine forums.
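To make the Dynamic Resolution idea concrete, here is a toy illustration of the kind of heuristic described above: scale the primary screen percentage from the previous frame's GPU time so the frame rate converges on a target budget. This is not Unreal's code; all names and constants (TARGET_GPU_MS, the clamp bounds) are hypothetical.

```python
# Illustrative sketch (not Unreal Engine's implementation) of a dynamic
# resolution heuristic driven by the previous frame's GPU time.
TARGET_GPU_MS = 16.6          # ~60 fps GPU budget
MIN_PCT, MAX_PCT = 50.0, 100.0

def next_screen_percentage(current_pct: float, last_gpu_ms: float) -> float:
    """Return the primary screen percentage to use for the next frame."""
    # GPU cost scales roughly with pixel count, i.e. with the square of the
    # screen percentage, so adjust by the square root of the budget ratio.
    scale = (TARGET_GPU_MS / max(last_gpu_ms, 1e-3)) ** 0.5
    return max(MIN_PCT, min(MAX_PCT, current_pct * scale))

# Example: the last frame took 22 ms on the GPU at 100% resolution.
print(next_screen_percentage(100.0, 22.0))   # -> roughly 87%
```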


Valve’s Steam Play Beta uses Proton, a modified WINE, allowing Linux gamers to play Windows games

Bhagyashree R
25 Aug 2018
2 min read
To provide compatibility with a wide range of Windows-only games to all Linux users, a beta version of the new and improved Steam Play is now available. It uses Proton, a modified distribution of Wine, to allow games which are exclusive to Windows to run on Linux and macOS. Proton is an open source tool, allowing advanced users to alter the code and make their own local builds. The included improvements to Wine have been designed and funded by Valve, in a joint development effort with CodeWeavers. In order to identify games that currently work well in this compatibility environment and to solve any remaining issues, Valve is testing the entire Steam catalog. The games enabled with this beta release include Beat Saber, Bejeweled 2 Deluxe, Doki Doki Literature Club!, DOOM, Fallout Shelter, FATE, FINAL FANTASY VI, and many more. Using Steam Play, gamers can purchase a game once and play it anywhere: whether you purchased your Steam Play enabled game on a Mac, Windows, or Linux machine, you will be able to play it on the other platforms free of charge.

What are the improvements introduced?

- You can now install and run Windows games with no Linux version available directly from the Linux Steam client, complete with native Steamworks and OpenVR support.
- Improved game compatibility and reduced performance impact, facilitated by DirectX 11 and 12 implementations that are now based on Vulkan.
- Improved support for fullscreen games, allowing them to seamlessly stretch to the desired display without interfering with the native monitor resolution or requiring the use of a virtual desktop.
- Improved game controller support, enabling games to automatically recognize all controllers supported by Steam.
- Improved performance for multi-threaded games compared to vanilla Wine.

Valve has mentioned that there could be a performance difference for games where graphics API translation is required, but there is no fundamental reason for a Vulkan title to run any slower. You can find out more about the Steam Play Beta, the full list of supported games, and how Proton works in the Steam post.

Facebook launched new multiplayer AR games in Messenger
Meet yuzu – an experimental emulator for the Nintendo Switch
What's got game developers excited about Unity 2018.2?


Blender 2.80 released with a new UI interface, Eevee real-time renderer, grease pencil, and more

Bhagyashree R
31 Jul 2019
3 min read
After about three long years of development, the much-awaited Blender 2.80 finally shipped yesterday. This release comes with a redesigned user interface, workspaces, templates, the Eevee real-time renderer, Grease Pencil, and much more.

The user interface is revamped with a focus on usability and accessibility

Blender's user interface has been revamped with a better focus on usability and accessibility. It has a fresh look and feel with a dark theme and a modern icon set. The icons change color based on the theme you select so that they are readable against bright or dark backgrounds. Users can easily access the most used features via the default shortcut keys or map their own. You can fully use Blender with a one-button trackpad or pen input, as it now uses the left mouse button by default for selection. There is a new right-click context menu for quick access to important commands in the given context, and a Quick Favorites popup menu where you can add your favorite commands.

Get started with templates and workspaces

You can now choose from multiple application templates when starting a new file. These include templates for 3D modeling, shading, animation, rendering, Grease Pencil based 2D drawing and animation, sculpting, VFX, video editing, and more. Workspaces give you a screen layout for specific tasks like modeling, sculpting, animating, or editing. Each template provides a default set of workspaces that can be customized, and you can create new workspaces or copy them from the templates.

Completely rewritten 3D viewport

Blender 2.80's completely rewritten 3D viewport is optimized for modern graphics and offers several new features. The new Workbench render engine helps you get work done in the viewport for tasks like scene layout, modeling, and sculpting. Viewport overlays let you decide which utilities are visible on top of the render. The new LookDev shading mode allows you to test multiple lighting conditions (HDRIs) without affecting the scene settings. The smoke and fire simulations have been overhauled to make them look as realistic as possible.

Eevee real-time renderer

Blender 2.80 has a new physically based real-time renderer called Eevee. It performs two roles: a renderer for final frames, and the engine driving Blender's real-time viewport for creating assets. Among the features it supports are volumetrics, screen-space reflections and refractions, depth of field, camera motion blur, bloom, and much more. You can create Eevee materials using the same shader nodes as Cycles, which makes it easier to render existing scenes.

2D animation with Grease Pencil

Grease Pencil enables you to combine the 2D and 3D worlds right in the viewport. With this release, it has become a "full 2D drawing and animation system." It comes with a new multi-frame editing mode with which you can change and edit several frames at the same time, and a Build modifier to animate drawings, similar to the Build modifier for 3D objects. Many other features have been added to Grease Pencil. Watch this video to get a glimpse of what you can create with it: https://www.youtube.com/watch?v=JF3KM-Ye5_A

Check out more features of Blender 2.80 on its official website.

Blender celebrates its 25th birthday!
Following Epic Games, Ubisoft joins Blender Development fund; adopts Blender as its main DCC tool
Epic Games grants Blender $1.2 million in cash to improve the quality of their software development projects
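For scripted workflows, the Eevee features above are also exposed through Blender's Python API. The snippet below is a small sketch, using the engine identifier and property names as documented for Blender 2.80; it assumes it is run from Blender's scripting workspace, and the exact attribute names may differ in other versions.

```python
# Sketch: switch the active scene to the Eevee renderer and enable a few of
# the effects mentioned above (run inside Blender 2.80's scripting workspace).
import bpy

scene = bpy.context.scene
scene.render.engine = 'BLENDER_EEVEE'   # use Eevee instead of Cycles/Workbench

eevee = scene.eevee
eevee.use_ssr = True           # screen-space reflections
eevee.use_bloom = True         # bloom
eevee.use_motion_blur = True   # camera motion blur
```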


Unite Berlin 2018 Keynote: Unity partners with Google, launches Ml-Agents ToolKit 0.4, Project MARS and more

Sugandha Lahoti
20 Jun 2018
5 min read
Unite Berlin 2018, Unity's annual developer conference, kicked off on June 19, 2018. This three-day extravaganza is filled with new announcements, sessions, and workshops from the creators of Unity. It's a place to develop, network, and participate with artists, developers, filmmakers, researchers, storytellers, and other creators. Day 1 was inaugurated with the keynote, presented by John Riccitiello, CEO of Unity Technologies. It featured previews of upcoming Unity technology, most prominently Unity's alliance with Google Cloud to help developers build connected games. Let's take a look at what was showcased.

Connected Games with Unity and Google Cloud

Unity and Google Cloud have collaborated to help developers create real-time multiplayer games. They are building a suite of managed services and tools to help developers build, test, and run connected experiences while offloading the hard work of quickly scaling game servers to Google Cloud. Games can be easily scaled to meet the needs of the players, and game developers can harness the massive power of Google Cloud without having to be cloud experts. Here's what Google Cloud with Unity has in store:

- Game-Server Hosting: streamlined resources to develop and scale hosted multiplayer games.
- Sample FPS: a production-quality sample project of a real-time multiplayer game.
- New ECS Networking Layer: fast, flexible networking code that delivers performant multiplayer by default.

Unity ML-Agents Toolkit v0.4

A new version of the Unity ML-Agents Toolkit was also announced at Unite Berlin. The v0.4 toolkit hosts multiple updates requested by the Unity community. Game developers now have the option to train environments directly from the Unity editor, rather than as built executables: they simply launch the learn.py script and then press the "play" button from within the editor to perform training. Unity has also launched a set of two new challenging environments, Walker and Pyramids. Walker is a physics-based humanoid ragdoll and Pyramids is a complex sparse-reward environment. There are also algorithmic improvements in reinforcement learning: agents are now trained to solve tasks that were previously learned only with great difficulty. Unity is also partnering with Udacity to launch the Deep Reinforcement Learning Nanodegree to help students and professionals gain a deeper understanding of reinforcement learning.

Augmented Reality with Project MARS

Unity also announced Project MARS, a Mixed and Augmented Reality studio that will be provided as a Unity extension. The studio will allow game developers to build AR and MR applications that intelligently interact with any real-world environment, with little to no custom coding. MARS will include abstraction layers for object recognition, location, and map data. It will have sample templates with simulated rooms for testing against different environments inside the editor. AR-specific gizmos will be provided to easily define spatial conditions like plane size, elevation, and proximity without requiring code or precise measurements. It will also include elements ranging from face masks, to avatars, to entire rooms of digital art. Project MARS will come to Unity as an experimental package later this year.

Unity also unveiled a Facial AR Remote component. Powered by augmented reality, this component captures facial performances to drive animated characters, allowing filmmakers and CGI developers to shoot CG content with body movement, just like you would with live action.

Kinematica: a machine learning powered animation system

Unity also showcased its AI research by announcing Kinematica, an all-new ML-powered animation system. Traditional animation systems generally require animators to explicitly define transitions; Kinematica has no superimposed structure, like graphs or blend trees. It generates smooth transitions and movements by applying machine learning to any data source, so game developers and animators no longer need to manually map out animation graphs. Kinematica decides in real time how to combine data clips from a single library into a sequence that matches the controller input, the environment content, and the gameplay requests. As with Project MARS, Kinematica will be available later this year as an experimental package.

New Prefab workflows

The entire Prefab system has been revamped with multiple improvements, and the improved Prefab workflow is now available as a preview build. New additions include Prefab Mode, Prefab variants, and nested Prefabs. Prefab Mode allows faster, more efficient, and safer editing of Prefabs in an isolated mode, without adding them to the actual scene. Developers can now edit model Prefabs, and the changes are propagated to all Prefab variants. With nested Prefabs, teams can work on different parts of a Prefab and then come together for the final asset.

Predictive Personalized Placements

Personalized Placements bring the best of both worlds to players and the commercial business. With this new feature, game developers can create tailor-made game experiences for each player. The feature runs on an engine powered by predictive analytics, which determines what to show each player based on what will drive the highest engagement and lifetime value. This can include an ad, an IAP promotion, a notification of a new feature, or a cross-promotion, and the algorithm will only get better with time.

These were only a select few of the announcements presented in the Unite Berlin keynote. You can watch the full video on YouTube. Details on other sessions, seminars, and activities are available on the Unite website.

GitHub for Unity 1.0 is here with Git LFS and file locking support
Unity announces a new automotive division and two-day Unity AutoTech Summit
Put your game face on! Unity 2018.1 is now available

Introducing Zink: An OpenGL implementation on top of Vulkan

Amrata Joshi
02 Nov 2018
3 min read
Erik "kusma" Faye-Lund, a graphics programmer, introduced Zink on Wednesday. Zink is an OpenGL implementation on top of Vulkan: a Mesa Gallium driver that leverages the existing OpenGL implementation in Mesa to provide hardware-accelerated OpenGL when only a Vulkan driver is available. Currently, Zink is only available as source code; distro packages aren't available yet, and it has only been tested on Linux. To build Zink, you need Git, Vulkan headers and libraries, Meson, and Ninja, as well as the dependencies required to compile Mesa. Erik says, "And most importantly, we are not a conformant OpenGL implementation. I'm not saying we will never be, but as it currently stands, we do not do conformance testing, and as such we neither submit conformance results to Khronos."

What Zink may offer

1. Just one API

OpenGL is a big API and is well established as a requirement for applications and desktop compositors. But since the release of Vulkan, there are two APIs for essentially the same hardware functionality, and both are important. As the software world works hard to implement Vulkan support everywhere, this is leading to complexity. In the future, things like desktop compositors would only need to support one API, and OpenGL's role could become purely one of legacy application compatibility. Maybe Zink can help make that future happen.

2. Lessen the workload of GPU drivers

Everyone wants less code to maintain for legacy hardware, but the set of drivers to maintain is growing rapidly, and new drivers are still being written for old hardware. If the hardware is capable of supporting Vulkan, it could be easier to only support Vulkan "natively" and do OpenGL through Zink. There aren't infinite programmers who can maintain every GPU driver forever, but with Zink, driver support might get better and easier.

3. Zink comes with benefits

Since Zink is implemented as a Gallium driver in Mesa, there are some side benefits that come "for free". For instance, projects like Gallium Nine or Clover could, in theory, work on top of the i965 Vulkan driver through Zink in the future. In the coming years, Zink might also act as a cooperation layer between OpenGL and Vulkan code in the same application.

4. Zink could run on a closed-source Vulkan driver

Zink might also run smoothly on top of a closed-source Vulkan driver and still get proper window system integration.

What does Zink require?

Currently, Zink requires a Vulkan 1.0 implementation and the following extensions:

- VK_KHR_maintenance1: required for viewport flipping.
- VK_KHR_external_memory_fd: required for getting the rendered result on screen.

Erik has also shared a list of features that Zink doesn't support yet:

- glPointSize() is not supported, though writing to gl_PointSize from the vertex shader does work.
- Texture borders are currently black due to Vulkan's lack of arbitrary border-color support.
- No control flow is supported in the shaders.
- There is no GL_ALPHA_TEST or glShadeModel(GL_FLAT) support yet.

It would be interesting to see how Zink turns out when these features go live. Read more about this news on Kusma's official website.

Valve's Steam Play Beta uses Proton, a modified WINE, allowing Linux gamers to play Windows games
UI elements and their implementation
Game Engine Wars: Unity vs Unreal Engine


DeepMind AI’s AlphaStar achieves Grandmaster level in StarCraft II with 99.8% efficiency

Vincy Davis
04 Nov 2019
5 min read
Earlier this year, in January, Google DeepMind's AI AlphaStar defeated two professional players, TLO and MaNa, at StarCraft II, a real-time strategy game. Two days ago, DeepMind announced that AlphaStar has now achieved the highest possible online competitive ranking, called Grandmaster level, in StarCraft II. This makes AlphaStar the first AI to reach the top league of a widely popular game without any restrictions. AlphaStar used multi-agent reinforcement learning and rated above 99.8% of officially ranked human players. It was able to achieve Grandmaster level for all three StarCraft II races: Protoss, Terran, and Zerg. The DeepMind researchers have published the details of AlphaStar in a paper titled 'Grandmaster level in StarCraft II using multi-agent reinforcement learning'.

https://twitter.com/DeepMindAI/status/1189617587916689408

How did AlphaStar achieve the Grandmaster level in StarCraft II?

The DeepMind researchers developed a robust and flexible agent by understanding the potential and limitations of open-ended learning, which helped them make AlphaStar cope with complex real-world domains. "Games like StarCraft are an excellent training ground to advance these approaches, as players must use limited information to make dynamic and difficult decisions that have ramifications on multiple levels and timescales," states the blog post.

The StarCraft II video game requires players to balance high-level economic decisions with individual control of hundreds of units. When playing this game, humans are under physical constraints which limit their reaction time and their rate of actions. Accordingly, AlphaStar was subjected to the same kinds of constraints, making it suffer from delays due to network latency and computation time. To limit its actions per minute (APM), AlphaStar's peak statistics were kept substantially lower than those of humans. To align with standard human movement, it also had a limited view of the map, could register only a limited number of mouse clicks, and was allowed only 22 non-duplicated actions every five seconds of play.

AlphaStar uses a combination of general-purpose techniques: neural network architectures, imitation learning, reinforcement learning, and multi-agent learning. Games were sampled from a publicly available dataset of anonymized human replays, on which the agent was trained to predict the action of every player. These predictions were then used to procure a diverse set of strategies reflecting the different modes of human play.

Read More: DeepMind's Alphastar AI agent will soon anonymously play with European StarCraft II players

Dario "TLO" Wünsch, a professional StarCraft II player, says, "I've found AlphaStar's gameplay incredibly impressive – the system is very skilled at assessing its strategic position, and knows exactly when to engage or disengage with its opponent. And while AlphaStar has excellent and precise control, it doesn't feel superhuman – certainly not on a level that a human couldn't theoretically achieve. Overall, it feels very fair – like it is playing a 'real' game of StarCraft."

According to the paper, AlphaStar had about 10^26 possible actions available at each time step, and it had to make thousands of actions before learning whether it had won or lost the game. One of the key strategies behind AlphaStar's performance was learning human strategies, which was necessary to ensure that the agents kept exploring those strategies throughout self-play. The researchers say, "To do this, we used imitation learning – combined with advanced neural network architectures and techniques used for language modeling – to create an initial policy which played the game better than 84% of active players."

AlphaStar also uses a latent variable to encode the distribution of opening moves from human games. This helped AlphaStar preserve high-level strategies and enabled it to represent many strategies within a single neural network. By combining advances in imitation learning, reinforcement learning, and the League, the researchers were able to train AlphaStar Final, the agent that reached Grandmaster level at the full game of StarCraft II without any modifications. AlphaStar used a camera interface, which gave it exactly the information that a human player would receive. All the interface constraints and restrictions placed on AlphaStar were approved by a professional player.

Finally, the results indicate that general-purpose learning techniques can be used to scale AI systems to work in complex and dynamic environments involving multiple actors. AlphaStar's great feat has got many people excited about the future of AI.

https://twitter.com/mickdooit/status/1189604170489315334
https://twitter.com/KaiLashArul/status/1190236180501139461
https://twitter.com/JoshuaSpanier/status/1190265236571459584

Interested readers can read the research paper to check AlphaStar's performance. Head over to DeepMind's blog for more details.

Google AI introduces Snap, a microkernel approach to 'Host Networking'
Are we entering the quantum computing era? Google's Sycamore achieves 'quantum supremacy' while IBM refutes the claim
Smart Spies attack: Alexa and Google Assistant can eavesdrop or vish (voice phish) unsuspecting users, disclose researchers from SRLabs
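The "22 non-duplicated actions every five seconds" constraint above is essentially a sliding-window rate limit. The sketch below is a generic illustration of such a limiter, not DeepMind's code; the class name and defaults are hypothetical and only mirror the numbers quoted in the article.

```python
# Illustrative sliding-window action rate limiter, loosely mirroring the
# "at most 22 non-duplicated actions per 5 seconds" constraint described above.
from collections import deque

class ActionRateLimiter:
    def __init__(self, max_actions: int = 22, window_s: float = 5.0):
        self.max_actions = max_actions
        self.window_s = window_s
        self.timestamps = deque()

    def allow(self, now_s: float) -> bool:
        # Drop timestamps that have fallen out of the sliding window.
        while self.timestamps and now_s - self.timestamps[0] >= self.window_s:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_actions:
            self.timestamps.append(now_s)
            return True
        return False

limiter = ActionRateLimiter()
issued = sum(limiter.allow(t * 0.1) for t in range(100))  # 100 attempts over 10 s
print(issued)  # never more than 22 actions in any rolling 5-second window
```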


Epic Games announces: Epic MegaGrants, RTX-powered Ray tracing demo, and free online services for game developers

Natasha Mathur
22 Mar 2019
4 min read
Epic Games, an American video game and software development company, made a series of announcements earlier this week. These include:

- Epic Games' CEO, Tim Sweeney, will offer $100 million in grants to game developers.
- A stunning RTX-powered ray tracing demo named Troll.
- The launch of Epic's free online services for game developers.

Epic MegaGrants: $100 million in funds for game developers

Tim Sweeney, CEO of Epic Games Inc, announced earlier this week that he will be offering $100 million in grants to game developers to boost the growth of the gaming industry. Sweeney made the announcement during a presentation on Wednesday at the Game Developers Conference (GDC), the world's largest professional game industry event, which ended yesterday in San Francisco. Epic Games previously created a $5 million fund for grants that have been disbursed over the last three years. Now Epic Games is building a new fund called Epic MegaGrants. These are "no-strings-attached" grants, meaning there are no contracts requiring game developers to do anything for Epic. All game developers need to do is apply for a grant and create an innovative project; if Epic's judges find it worthy, they'll offer them the funds. "There are no commercial hooks back to Epic. You don't have to commit to any deliverables. This is our way of sharing Fortnite's unbelievable success with as many developers as we can," said Sweeney.

Troll: a ray tracing Unreal Engine 4 demo

Another eye-grabbing moment at GDC this year was a "visually stunning" ray tracing demo by Goodbye Kansas and Deep Forest Films called "Troll". Troll was rendered in real time using Unreal Engine 4.22 ray tracing and camera effects, powered by a single NVIDIA GeForce RTX 2080 Ti graphics card. Troll is visually inspired by Swedish painter and illustrator John Bauer, whose illustrations are famous from 'Among Gnomes and Trolls', an anthology of Swedish folklore and fairy tales. https://www.youtube.com/watch?v=Qjt_MqEOcGM

"Ray tracing is more than just reflections — it's about all the subtle lighting interactions needed to create a natural, beautiful image. Ray tracing adds these subtle lighting effects throughout the scene, making everything look more real and natural," said Nick Penwarden, Director of Engineering for Unreal Engine at Epic Games. The NVIDIA team states in a blog post that Epic Games has been working to integrate RTX-accelerated ray tracing into its popular Unreal Engine 4. In fact, Unreal Engine 4.22 will support the new Microsoft DXR API for real-time ray tracing.

Epic's free online services launch for game developers

Epic Games also announced the launch of free tools and services, part of the Epic Online Services announced in December 2018. The SDK is available via the new developer portal for immediate download and use, and currently supports Windows, Mac, and Linux. As part of this release, the SDK provides two free services: game analytics and player ticketing. Game analytics helps developers understand player behavior; it features DAU (daily active users), MAU (monthly active users), retention, new player counts, game launch counts, online user counts, and more. The ticketing system connects players directly with developers and allows them to report bugs or other problems. These two services will continue to evolve along with the rest of Epic Online Services (EOS) to offer the infrastructure and tools developers need to launch, operate, and scale high-quality online games. Epic Games will also offer additional free services throughout 2019, including player data storage, player reports, leaderboards and stats, player identity, player inventory, matchmaking, and more. "We are committed to developing EOS with features that can be used with any engine, any store and that can support any major platform...these services will allow developers to deliver cross-platform gameplay experiences that enable players to enjoy games no matter what platform they play on," states the Epic Games team.

Fortnite server suffered a minor outage, Epic Games was quick to address the issue
Epic Games CEO calls Google "irresponsible" for disclosing the security flaw in Fortnite Android installer
Fortnite creator Epic Games launches Epic Games store where developers get 88% of revenue earned


Valve announces Half-Life: Alyx, its first flagship VR game

Savia Lobo
19 Nov 2019
3 min read
Yesterday, Valve Corporation, the popular American video game developer, announced Half-Life: Alyx, the first new game in the popular Half-Life series in over a decade. The company tweeted that it will unveil the first look on Thursday, 21 November 2019, at 10 am Pacific Time. https://twitter.com/valvesoftware/status/1196566870360387584

Half-Life: Alyx, a brand-new game in the Half-Life universe, is designed exclusively for PC virtual reality systems (Valve Index, Oculus Rift, HTC Vive, Windows Mixed Reality). Looking at Valve's history in PC games, it has created some of the most influential and critically acclaimed games ever made. However, "Valve has famously never finished either of its Half-Life supposed trilogies of games. After Half-Life and Half-Life 2, the company created Half-Life: Episode 1 and Half-Life: Episode 2, but no third game in the series," the Verge reports.

Ars Technica reveals, "The game's name confirms what has been loudly rumored for months: that you will play this game from the perspective of Alyx Vance, a character introduced in 2004's Half-Life 2. Instead of stepping forward in time, HLA will rewind to the period between the first two mainline Half-Life games."

"A data leak from Valve's Source 2 game engine, as uncovered in September by Valve News Network, pointed to a new control system labeled as the 'Grabbity Gloves' in its codebase. Multiple sources have confirmed that this is indeed a major control system in HLA," Ars Technica claims. These Grabbity Gloves can be described as 'magnet gloves', which let you point at distant objects and attract them to your hands. Valve has already announced plans to support all major PC VR systems for its next VR game, and these new gloves seem like the right system to scale to whatever controllers come to VR.

Many gamers are excited to check out this Half-Life installment and are waiting to see whether the company lives up to its promises. A user on Hacker News commented, "Wonder what Valve is doubling down with this title? It seems like the previous games were all ground-breaking narratives, but with most of the storytellers having left in the last few years, I'd be curious to see what makes this different than your standard VR games." Another user on Hacker News commented, "From the tech side it was the heavy, and smart, use of scripting that made HL1 stand out. With HL2 it was the added physics engine trough the change to Source, back then that used to be a big deal and whole gameplay mechanics revolve around that (gravity gun). In that context, I do not really consider it that surprising for the next HL project to focus on VR because even early demos of that combination looked already very promising 5 years ago."

We will update this space after Half-Life: Alyx is unveiled on Thursday. To know more about the announcement in detail, read Ars Technica's complete coverage.

Valve reveals new Index VR Kit with detail specs and costs upto $999
Why does Oculus CTO John Carmack prefer 2D VR interfaces over 3D Virtual Reality interfaces?
Oculus Rift S: A new VR with inside-out tracking, improved resolution and more!

Now you can play Assassin’s Creed in Chrome thanks to Google’s new game streaming service

Natasha Mathur
03 Oct 2018
2 min read
Google announced a new experimental game streaming service, Project Stream, earlier this week. Google calls the project a "technical test" and has partnered with Ubisoft, one of the most popular video game publishers, to stream its upcoming Assassin's Creed Odyssey via Project Stream in Chrome. "We've been working on Project Stream, a technical test to solve some of the biggest challenges of streaming. For this test, we're going to push the limits with one of the most demanding applications for streaming—a blockbuster video game," writes Catherine Hsiao in the announcement blog post.

Google points out that its major goal with Project Stream is to effectively stream AAA game titles, because the Google team is inspired by the technology that goes into AAA video games. Additionally, working with a AAA game title is more challenging than working with a game that has less intense graphics. "Every pixel is powered by an array of real-time rendering technology, artistry, visual effects, animation, simulation, physics, and dynamics. We're inspired by the game creators who spend years crafting these amazing worlds, adventures, and experiences, and we're building technology that we hope will support and empower that creativity," states the post.

With Project Stream, Google is working to ensure that latency stays minimal and that the graphics of the game are not compromised when using its streaming service. "The idea of streaming such graphically-rich content that requires near-instant interaction between the game controller and the graphics on the screen poses a number of challenges. When streaming TV or movies, consumers are comfortable with a few seconds of buffering at the start, but streaming high-quality games requires latency measured in milliseconds, with no graphics degradation," adds Google.

Google has made limited spaces available for users to try Project Stream, starting October 5. If you want to participate, you can apply on Project Stream's official website. Participation is only open to U.S. residents who are 17 years or older. For more information, check out the official announcement.

Google Project Zero discovers a cache invalidation bug in Linux memory management, Ubuntu and Debian remain vulnerable
Google announces new Artificial Intelligence features for Google Search on its 20th birthday
Google announces the Beta version of Cloud Source Repositories


Unity releases ML-Agents toolkit v0.5 with Gym interface, a new suite of learning environments

Sugandha Lahoti
12 Sep 2018
2 min read
In its commitment to becoming the go-to platform for artificial intelligence, Unity has released a new version of its ML-Agents Toolkit. ML-Agents Toolkit v0.5 comes with more flexible action specification, a Gym interface for researchers to more easily integrate ML-Agents environments into their training workflows, and a new suite of learning environments replicating some of the Continuous Control benchmarks used in deep reinforcement learning. Unity has also released a research paper on ML-Agents, titled "Unity: A General Platform for Intelligent Agents."

Changes to the ML-Agents Toolkit v0.5

Highlighted changes to repository structure:

- The python folder has been renamed ml-agents. It now contains a python package called mlagents.
- The unity-environment folder, containing the Unity project, has been renamed UnitySDK.
- The protobuf definitions used for communication have been added to a new protobuf-definitions folder.
- Example curricula and the trainer configuration file have been moved to a new config sub-directory.

New features:

- A new package, gym-unity, provides a Gym interface to wrap UnityEnvironment (a usage sketch follows this article).
- The ML-Agents Toolkit v0.5 can now run multiple concurrent training sessions with the --num-runs=<n> command line option.
- Added Meta-Curriculum, which supports curriculum learning in multi-brain environments.
- Action masking for discrete control, which makes it possible to mask invalid actions at each step to limit the actions an agent can take.

Fixes and performance improvements:

- Replaced some activation functions with Swish.
- Visual observations use PNG instead of JPEG to avoid compression losses.
- Improved Python unit tests.
- Multiple training sessions are available on a single GPU.
- Curriculum lessons are now tracked correctly.
- Developers can now visualize value estimates when using models trained with PPO from Unity with GetValueEstimate().
- It is now possible to specify which camera the Monitor displays to.
- Console summaries will now be displayed even when running inference mode from Python.
- The minimum supported Unity version is now 2017.4.

You can read all about the new version of the ML-Agents Toolkit on the Unity Blog.

Unity releases ML-Agents v0.3: Imitation Learning, Memory-Enhanced Agents and more.
Unity Machine Learning Agents: Transforming Games with Artificial Intelligence.
Unite Berlin 2018 Keynote: Unity partners with Google, launches Ml-Agents ToolKit 0.4, Project MARS and more.
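The gym-unity wrapper mentioned above lets a built Unity environment be driven like any other Gym environment. The sketch below is an assumption-laden illustration: the import path, class name, and constructor arguments follow the v0.5 release notes and may differ in other versions, and the path to the built executable ("./builds/3DBall") is a placeholder.

```python
# Minimal sketch of the gym-unity wrapper from ML-Agents Toolkit v0.5.
# Import path, class name, and arguments are assumptions based on the release
# notes; the executable path is a placeholder.
from gym_unity.envs import UnityEnv

env = UnityEnv("./builds/3DBall", worker_id=0, use_visual=False)

obs = env.reset()
for _ in range(100):
    action = env.action_space.sample()          # random policy for illustration
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()
```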


Satya Nadella reflects on Microsoft's progress in areas of data, AI, business applications, trust, privacy and more.

Sugandha Lahoti
17 Oct 2018
5 min read
Microsoft CEO Satya Nadella published his letter to shareholders from the company's 2018 annual report on LinkedIn yesterday. He talks about Microsoft's accomplishments in the past year and the results and progress of Microsoft's modern workplace, business applications, infrastructure, data, AI, and gaming efforts. He also mentions the data and privacy rules adopted by Microsoft and the company's commitment to "instill trust in technology across everything they do."

Microsoft's results and progress

Data and AI

Azure Cosmos DB has already exceeded $100 million in annualized revenue, and the company saw rapid customer adoption of Azure Databricks for data preparation, advanced analytics, and machine learning scenarios. The Azure Bot Service has nearly 300,000 developers, and Microsoft is on the road to building the world's first AI supercomputer in Azure. Microsoft also acquired GitHub, recognizing the increasingly vital role developers will play in value creation and growth across every industry.

Business Applications

Microsoft's investments in Power BI have made it the leader in business analytics in the cloud. Its Open Data Initiative with Adobe and SAP will help customers take control of their data and build new experiences that truly put people at the center. HoloLens and mixed reality will be used for designing for first-line workers, who account for 80 percent of the world's workforce. New solutions powered by LinkedIn and Microsoft Graphs help companies manage talent, training, and sales and marketing.

Applications and Infrastructure

Azure revenue grew 91 percent year-over-year, and the company is investing aggressively to build Azure as the world's computer. Microsoft added nearly 500 new Azure capabilities in the past year, focused on both existing workloads and new workloads such as IoT and Edge AI. The company expanded its global data center footprint to 54 regions and introduced Azure IoT, Azure Stack, and Azure Sphere.

Modern Workplace

More than 135 million people use Office 365 commercial every month, and Outlook Mobile is used on 100 million iOS and Android devices worldwide. Microsoft Teams is being used by more than 300,000 organizations of all sizes, including 87 of the Fortune 100. Windows 10 is active on nearly 700 million devices around the world.

Gaming

The company surpassed $10 billion in gaming revenue this year. Xbox Live now has 57 million monthly active users, and Microsoft is investing in new services like Mixer and Game Pass. It also added five new gaming studios this year, including PlayFab, to build a cloud platform for the gaming industry across mobile, PC, and console.

Microsoft's impact around the globe

Nadella highlighted that companies such as Coca-Cola, Chevron Corporation, and ZF Group, a car parts manufacturer in Germany, are using Microsoft's technology to build their own digital capabilities. Walmart is also using Azure and Microsoft 365 to transform the shopping experience for customers. In Kenya, M-KOPA Solar, one of Microsoft's partners, connected homes across sub-Saharan Africa to solar power using the Microsoft Cloud. Dynamics 365 was used in Arizona to improve outcomes among the state's 15,000 children in foster care. MedApp is using HoloLens in Poland to help cardiologists visualize a patient's heart as it beats in real time. In Cambodia, underserved children in rural communities are learning to code with Minecraft.

How Microsoft is handling trust and responsibility

Microsoft's motto is "instilling trust in technology across everything they do." Nadella says, "We believe that privacy is a fundamental human right, which is why compliance is deeply embedded in all our processes and practices." Microsoft has extended the data subject rights of GDPR to all its customers around the world, not just those in the European Union, and advocated for the passage of the CLOUD Act in the U.S. The company also led the Cybersecurity Tech Accord, which has been signed by 61 global organizations, and is calling on governments to do more to make the internet safe. It announced the Defending Democracy Program to work with governments around the world to help safeguard voting, and introduced AccountGuard to offer advanced cybersecurity protections to political campaigns in the U.S.

The company is also investing in tools for detecting and addressing bias in AI systems and is advocating for government regulation. It is addressing society's most pressing challenges with new programs like AI for Earth, a five-year, $50M commitment to environmental sustainability, and AI for Accessibility to benefit people with disabilities. Nadella further adds, "Over the past year, we have made progress in building a diverse and inclusive culture where everyone can do their best work." Microsoft has nearly doubled the number of women corporate vice presidents since FY16 and increased African American/Black and Hispanic/Latino representation by 33 percent. He concludes by saying, "I'm proud of our progress, and I'm proud of the more than 100,000 Microsoft employees around the world who are focused on our customers' success in this new era."

Read the full letter on LinkedIn.

Paul Allen, Microsoft co-founder, philanthropist, and developer dies of cancer at 65.
'Employees of Microsoft' ask Microsoft not to bid on US Military's Project JEDI in an open letter.
Microsoft joins the Open Invention Network community, making 60,000 of its patents accessible to fellow members

Pluribus, an AI bot built by Facebook and CMU researchers, has beaten professionals at six-player no-limit Texas Hold ’Em Poker

Sugandha Lahoti
12 Jul 2019
5 min read
Researchers from Facebook and Carnegie Mellon University have developed an AI bot that has defeated human professionals in six-player no-limit Texas Hold’em poker.   Pluribus defeated pro players in both “five AIs + one human player” format and a “one AI + five human players” format. Pluribus was tested in 10,000 games against five human players, as well as in 10,000 rounds where five copies of the AI  played against one professional. This is the first time an AI bot has beaten top human players in a complex game with more than two players or two teams. Pluribus was developed by Noam Brown of Facebook AI Research and Tuomas Sandholm of Carnegie Mellon University. Pluribus builds on Libratus, their previous poker-playing AI which defeated professionals at Heads-Up Texas Hold ’Em, a two-player game in 2017. Mastering 6-player Poker for AI bots is difficult considering the number of possible actions. First, obviously since this involves six players, the games have a lot more variables and the bot can’t figure out a perfect strategy for each game - as it would do for a two player game. Second, Poker involves hidden information, in which a player only has access to the cards that they see. AI has to take into account how it would act with different cards so it isn’t obvious when it has a good hand. Brown wrote on a Hacker News thread, “So much of early AI research was focused on beating humans at chess and later Go. But those techniques don't directly carry over to an imperfect-information game like poker. The challenge of hidden information was kind of neglected by the AI community. This line of research really has its origins in the game theory community actually (which is why the notation is completely different from reinforcement learning). Fortunately, these techniques now work really really well for poker.” What went behind Pluribus? Initially, Pluribus engages in self-play by playing against copies of itself, without any data from human or prior AI play used as input. The AI starts from scratch by playing randomly, and gradually improves as it determines which actions, and which probability distribution over those actions, lead to better outcomes against earlier versions of its strategy. Pluribus’s self-play produces a strategy for the entire game offline, called the blueprint strategy. This online search algorithm can efficiently evaluate its options by searching just a few moves ahead rather than only to the end of the game. Pluribus improves upon the blueprint strategy by searching for a better strategy in real time for the situations it finds itself in during the game. Real-time search The blueprint strategy in Pluribus was computed using a variant of counterfactual regret minimization (CFR). The researchers used Monte Carlo CFR (MCCFR) that samples actions in the game tree rather than traversing the entire game tree on each iteration. Pluribus only plays according to this blueprint strategy in the first betting round (of four), where the number of decision points is small enough that the blueprint strategy can afford to not use information abstraction and have a lot of actions in the action abstraction. After the first round, Pluribus instead conducts a real-time search to determine a better, finer-grained strategy for the current situation it is in. https://youtu.be/BDF528wSKl8 What is astonishing is that Pluribus uses very little processing power and memory, less than $150 worth of cloud computing resources. 
What is astonishing is that Pluribus uses very little processing power and memory, less than $150 worth of cloud computing resources. The researchers trained the blueprint strategy in eight days on a 64-core server and required less than 512 GB of RAM. No GPUs were used.

Stassa Patsantzis, a Ph.D. research student, appreciated Pluribus's resource-friendly compute requirements. She commented on Hacker News, "That's the best part in all of this. I'm hoping that there is going to be more of this kind of result, signaling a shift away from Big Data and huge compute and towards well-designed and efficient algorithms." She also pointed out that this is significantly less compute than the ML systems at DeepMind and OpenAI rely on. "In fact, I kind of expect it. The harder it gets to do the kind of machine learning that only large groups like DeepMind and OpenAI can do, the more smaller teams will push the other way and find ways to keep making progress cheaply and efficiently", she added.

Real-life implications

AI bots such as Pluribus give a better understanding of how to build general AI that can cope with multi-agent environments, both with other AI agents and with humans. A six-player AI bot also has broader real-world relevance: two-player zero-sum interactions (in which one player wins and one player loses) are common in recreational games, but they are very rare in real life. Such bots could be used for handling harmful content, dealing with cybersecurity challenges, or managing an online auction or navigating traffic, all of which involve multiple actors and/or hidden information.

Not everyone is celebrating, though. Darren Elias, a four-time World Poker Tour title holder who helped test the program's skills, said Pluribus could spell the end of high-stakes online poker. "I don't think many people will play online poker for a lot of money when they know that this type of software might be out there and people could use it to play against them for money." Poker sites are actively working to detect and root out possible bots. Brown, Pluribus' developer, on the other hand, is optimistic. He says it's exciting that a bot could teach humans new strategies and ultimately improve the game. "I think those strategies are going to start penetrating the poker community and really change the way professional poker is played," he said.

For more information on Pluribus and how it works, read Facebook's blog.

DeepMind's AlphaStar AI agent will soon anonymously play with European StarCraft II players
Google DeepMind's AI AlphaStar beats StarCraft II pros TLO and MaNa
OpenAI Five bots destroyed human Dota 2 players this weekend

Unity announces a new automotive division and two-day Unity AutoTech Summit

Sugandha Lahoti
18 May 2018
3 min read
Unity Technologies has announced that it is moving into the automotive and transportation industry. With its newly formed automotive division, the company plans to bring its real-time rendering technology to auto creators, and it will show off this technology at its very first Unity AutoTech Summit at Unite Berlin, scheduled for June 19-21 this year.

As John Riccitiello, Chief Executive Officer, Unity Technologies, describes it: "The real-time revolution in automotive is here. Over the past 15 years, we've made great strides leading the game development industry – now, we're bringing our real-time rendering technology to a new group of creators, equipping automakers with the tools that will allow them to iterate at the speed of thought."

Unity Automotive Division

The automotive division will bring real-time 3D, VR, and AR technologies to the world's automotive original equipment manufacturers (OEMs) and suppliers through the Unity engine. The division is led by experts from key automobile companies such as Volkswagen, Renault, GM, Delphi, and Denso. Unity has already been working alongside the world's top OEMs, including Audi (VR design review), Volkswagen (interactive VR training for 10,000 employees), Cadillac (Virtual Showroom), and Mercedes-Benz (AMG Powerwall).

Unity AutoTech Summit

The Unity AutoTech Summit at Unite Berlin is a one-of-a-kind, two-day gathering of sessions, tech demos, and networking dedicated to the automotive industry. Featured sessions will include:

Bringing the Lexus LC500 to Life Through the Magic of Unity by David Telfer, Joe DeMiero, and Carl Seibert from Lexus.
How to Drive VR/AR Use Cases for Enterprises Using the Example of Volkswagen by Torben Volkwein from Volkswagen.
Creating Powerful Mixed Reality Applications Across Auto by Jason Yim from Trigger Global for Nissan.
Next Level Rendering Quality for Automotive by Arisa Scott from Unity.
Unity Training Workshops Taster: Introduction to Automotive Design Visualization by Anuja Dharkar from Unity.
Unity in Automotive - The Road Ahead by Tim McDonough and Ed Martin from Unity.

Unity for Enterprise

Unity and PiXYZ have also partnered to launch the enterprise-level Unity Industry Bundle, which consists of PiXYZ products, training, and Unity Pro. It streamlines the preparation and import of CAD data for creating real-time experiences in Unity, supporting use cases such as design and engineering, AR/VR training, and the creation of high-impact customer experiences, for everyone from datacenters to individuals.

Visit the Unity Automotive and Transportation website for the full list of Unity's solutions.

Put your game face on! Unity 2018.1 is now available
Unity plugins for augmented reality application development
Unity 2D & 3D game kits simplify Unity game development for beginners