
Tech News - 2D Game Development

38 Articles

Epic releases Unreal Engine 4.22, focuses on adding “photorealism in real-time environments”

Sugandha Lahoti
03 Apr 2019
4 min read
Epic Games released a new version of its flagship game engine, Unreal Engine 4.22. This release comes with a total of 174 improvements, focused on “pushing the boundaries of photorealism in real-time environments”. It also brings build times that are up to 3x faster, and new features such as real-time ray tracing. Unreal Engine 4.22 also adds support for Microsoft HoloLens remote streaming and Visual Studio 2019.

What’s new in Unreal Engine 4.22?

• Real-Time Ray Tracing and Path Tracing (Early Access): The ray tracing features, first introduced in a preview in mid-February, are composed of a series of ray tracing shaders and ray tracing effects. They help achieve natural, realistic-looking lighting effects in real time. The Path Tracer includes a full global illumination path for indirect lighting that creates ground-truth reference renders right inside the engine, improving the workflow for checking content in a scene without needing to export to a third-party offline path tracer for comparison.
• New mesh drawing pipeline: The new pipeline for mesh drawing results in faster caching of information for static scene elements. Automatic instancing merges draw calls where possible, and the rewrite resulted in four to six times fewer lines of code (see the sketch below). This is a big change, so backwards compatibility for Drawing Policies is not possible; any custom Drawing Policies will need to be rewritten as FMeshPassProcessors in the new architecture.
• Multi-user editing (Early Access): Simultaneous multi-user editing allows multiple level designers and artists to connect several instances of Unreal Editor together and work collaboratively in a shared editing session.
• Faster C++ iterations: Unreal has licensed Molecular Matters’ Live++ for all developers to use on their Unreal Engine projects and integrated it as the new Live Coding feature. Developers can now make C++ code changes in their development environment and compile and patch them into a running editor or standalone game in a few seconds. UE 4.22 also optimizes UnrealBuildTool and UnrealHeaderTool, reducing build times and resulting in up to 3x faster iterations when making C++ code changes.
• Improved audio with TimeSynth (Early Access): TimeSynth is a new audio component with features like sample-accurate starting, stopping, and concatenation of audio clips. It also includes precise and synchronous audio event queuing.
• Enhanced animation: Unreal Engine 4.22 comes with a new Animation Plugin that is based on the Master-Pose Component system and adds blending and additive Animation States. It reduces the overall amount of animation work required for a crowd of actors. This release also features an Anim Budgeter tool to help developers set a fixed budget per platform (milliseconds of work to perform on the game thread).
• Improvements in the virtual production pipeline: Unreal’s built-in compositing tool Composure has an updated UI that enables real-time compositing of images, video feeds, and CG elements directly within the Unreal Engine. Unreal Engine now supports the OpenColorIO (OCIO) framework for transforming the color space of any Texture or Composure Element directly within the engine. With hardware-accelerated video decoding (Experimental), UE 4.22 can use the GPU on Windows platforms to speed up the processing of H.264 video streams, reducing the strain on the CPU during playback. UE 4.22 also ships with new professional video I/O input formats and devices, including 4K UHD inputs for both AJA and Blackmagic, and AJA Kona 5 devices. Finally, several new features make the experimental nDisplay multi-display rendering system more flexible, handling new kinds of hardware configurations and inputs.

These were just a select few updates. To learn more about Unreal Engine 4.22, head over to the Unreal Engine blog.

• Unreal Engine 4.22 update: support added for Microsoft’s DirectX Raytracing (DXR)
• Unreal Engine 4.20 released with focus on mobile and immersive (AR/VR/MR) devices
• Implementing an AI in Unreal Engine 4 with AI Perception components [Tutorial]
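The automatic instancing mentioned above boils down to grouping submitted draws that share the same mesh and material, then issuing one instanced draw per group instead of one draw per object. The Go sketch below illustrates that general idea only; it is a hypothetical, engine-agnostic example with invented type and function names, not Unreal's C++ mesh drawing pipeline or FMeshPassProcessor API.

```go
package main

import "fmt"

// DrawCall describes one requested draw. Calls that share the same mesh and
// material can be merged into a single instanced draw.
type DrawCall struct {
	Mesh      string
	Material  string
	Transform [16]float32 // per-instance data (e.g. a 4x4 world matrix)
}

// batchKey identifies draws that are safe to merge.
type batchKey struct {
	Mesh     string
	Material string
}

// InstancedDraw is one merged draw carrying all per-instance transforms.
type InstancedDraw struct {
	Mesh       string
	Material   string
	Transforms [][16]float32
}

// MergeDrawCalls groups compatible draw calls so a renderer could submit one
// instanced draw per (mesh, material) pair instead of one draw per object.
func MergeDrawCalls(calls []DrawCall) []InstancedDraw {
	groups := map[batchKey]*InstancedDraw{}
	order := []batchKey{} // keep deterministic submission order
	for _, c := range calls {
		k := batchKey{c.Mesh, c.Material}
		g, ok := groups[k]
		if !ok {
			g = &InstancedDraw{Mesh: c.Mesh, Material: c.Material}
			groups[k] = g
			order = append(order, k)
		}
		g.Transforms = append(g.Transforms, c.Transform)
	}
	out := make([]InstancedDraw, 0, len(order))
	for _, k := range order {
		out = append(out, *groups[k])
	}
	return out
}

func main() {
	calls := []DrawCall{
		{Mesh: "rock", Material: "stone"},
		{Mesh: "rock", Material: "stone"},
		{Mesh: "tree", Material: "bark"},
	}
	for _, d := range MergeDrawCalls(calls) {
		fmt.Printf("draw %s/%s x%d instances\n", d.Mesh, d.Material, len(d.Transforms))
	}
}
```

Merging draws this way trades a little CPU bookkeeping for far fewer driver submissions, which is where most per-draw overhead usually lives.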


Godot 3.1 released with improved C# support, OpenGL ES 2.0 renderer and much more!

Savia Lobo
15 Mar 2019
4 min read
On Wednesday, 13 March, the Godot developers announced the release of Godot 3.1, a new version of the open source, cross-platform 2D and 3D game engine. This new version includes many of the most requested improvements over the previous major release, Godot 3.0.

Improved features in Godot 3.1

OpenGL ES 2.0 renderer: Rendering is done entirely in sRGB color space (the GLES3 renderer uses linear color space). This is much more efficient and compatible, but it means that HDR is not supported. Some advanced PBR features such as subsurface scattering are not supported, and unsupported features are not visible when editing materials. Some shader features will not work and will throw an error when used, and some post-processing effects are not present either; unsupported features are not visible when editing environments. GPU-based particles will not work, as there is no transform feedback support; users can use the new CPUParticles node instead.

Optional typing in GDScript: This has been one of the most requested Godot features from day one. GDScript lets you write code quickly within a controlled environment. The code editor now shows which lines are type-safe with a slight highlight of the line number. This will be vital in the future for optimizing small pieces of code that need more performance.

Revamped Inspector: The Godot inspector has been rewritten from scratch. It includes proper vector field editing, sub-inspectors for resource editing, better custom visual editors for many types of objects, very comfortable spin-slider controls, better array and dictionary editing, and many more features.

KinematicBody2D (and 3D) improvements: Kinematic bodies are among Godot's most useful nodes. They allow creating very game-like character motion with little effort. For Godot 3.1 they have been considerably improved with support for snapping the body to the floor, support for RayCast shapes in kinematic bodies, and support for synchronizing kinematic movement to physics, avoiding a one-frame delay.

New axis handling system: Godot 3.1 uses the new concept of "action strength". This approach allows using actions for all use cases, and it makes it very easy to create in-game customizable mappings and customization screens (see the sketch below).

Visual Shader Editor: This was a pending feature to re-implement for Godot 3.0, but it couldn't be done in time back then. The new version adds features such as PBR outputs, port previews, and easier-to-use mapping to inputs.

2D meshes: Godot now supports 2D meshes, which can be used from code or converted from sprites to avoid drawing large transparent areas.

2D skeletons: It is now possible to create 2D skeletons with the new Skeleton2D and Bone2D nodes. Additionally, Polygon2D vertices can be assigned bones and weight painted. Adding internal vertices for better deformation is also supported.

Constructive Solid Geometry (CSG): CSG tools have been added for fast level prototyping, allowing generic primitives and custom meshes to be combined via boolean operations to generate more complex shapes. They can also become colliders to test together with physics.

CPU-based particle system: Godot 3.0 integrated a GPU-based particle system, which allows emitting millions of particles at little performance cost. The developers have now added alternative CPUParticles and CPUParticles2D nodes that perform particle processing on the CPU (and draw using the MultiMesh API). These nodes open the door to features such as physics interaction, sub-emitters, or manual emission, which are not possible on the GPU.

More VCS-friendly: The new 3.1 version includes some much-requested enhancements: folded properties are no longer saved in scenes, which avoids unnecessary history pollution, and non-modified properties are no longer saved, which reduces text files considerably and makes history even more readable.

Improved C# support: In Godot 3.1, C# projects can be exported to Linux, macOS, and Windows. Support for Android, iOS, and HTML5 will come soon.

To learn about other improvements in detail, visit the changelog or the official website.

• Microsoft announces Game Stack with Xbox Live integration to Android and iOS
• OpenAI introduces Neural MMO, a multiagent game environment for reinforcement learning agents
• Google teases a game streaming service set for Game Developers Conference
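The "action strength" model described above treats every input, digital or analog, as a named action with a strength between 0 and 1, so the same gameplay code path serves keyboards, gamepad buttons, and analog sticks. The sketch below is a minimal, engine-agnostic illustration of that idea in Go; the type and function names are invented for the example and are not Godot's GDScript or input-map API.

```go
package main

import "fmt"

// ActionMap stores, for each named action, the strongest strength reported by
// any bound input this frame (0.0 = not pressed, 1.0 = fully pressed).
type ActionMap struct {
	strength map[string]float64
}

func NewActionMap() *ActionMap {
	return &ActionMap{strength: map[string]float64{}}
}

// Report is called by input backends: a keyboard reports 0 or 1, an analog
// stick reports any value in between. The strongest binding wins.
func (a *ActionMap) Report(action string, s float64) {
	if s < 0 {
		s = 0
	}
	if s > 1 {
		s = 1
	}
	if s > a.strength[action] {
		a.strength[action] = s
	}
}

// Strength lets gameplay code scale movement by how hard the action is pressed.
func (a *ActionMap) Strength(action string) float64 { return a.strength[action] }

// Pressed applies a simple threshold for code that only needs on/off behavior.
func (a *ActionMap) Pressed(action string) bool { return a.strength[action] > 0.5 }

func main() {
	input := NewActionMap()
	input.Report("move_right", 1.0) // keyboard: fully pressed
	input.Report("move_right", 0.3) // analog stick: weaker, so ignored
	input.Report("jump", 0.8)

	speed := 200.0 * input.Strength("move_right")
	fmt.Printf("speed=%.0f, jump=%v\n", speed, input.Pressed("jump"))
}
```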


Epic Games announces: Epic MegaGrants, RTX-powered Ray tracing demo, and free online services for game developers

Natasha Mathur
22 Mar 2019
4 min read
Epic Games, an American video game and software development company, made a series of announcements earlier this week. These include: Epic Games' CEO Tim Sweeney offering $100 million in grants to game developers; a stunning RTX-powered ray tracing demo named Troll; and the launch of Epic's free Online Services for game developers.

Epic MegaGrants: $100 million in funds for game developers

Tim Sweeney, CEO of Epic Games Inc, announced earlier this week that he will be offering $100 million in grants to game developers to boost the growth of the gaming industry. Sweeney made the announcement during a presentation on Wednesday at the Game Developers Conference (GDC), the world's largest professional game industry event, which ended yesterday in San Francisco. Epic Games had previously created a $5 million fund for grants that have been disbursed over the last three years; now it is building a new fund called Epic MegaGrants. These are "no-strings-attached" grants, meaning they don't involve any contracts requiring game developers to do anything for Epic. All game developers need to do is apply for a grant and create an innovative project; if Epic's judges find it worthy, they'll offer them the funds. "There are no commercial hooks back to Epic. You don't have to commit to any deliverables. This is our way of sharing Fortnite's unbelievable success with as many developers as we can", said Sweeney.

Troll: a ray tracing Unreal Engine 4 demo

Another eye-grabbing moment at GDC this year was a "visually stunning" ray tracing demo revealed by Goodbye Kansas and Deep Forest Films, called "Troll". Troll was rendered in real time using Unreal Engine 4.22 ray tracing and camera effects, powered by a single NVIDIA GeForce RTX 2080 Ti graphics card. Troll is visually inspired by Swedish painter and illustrator John Bauer, famous for his illustrations for the Swedish folklore and fairy-tale anthology Among Gnomes and Trolls.

https://www.youtube.com/watch?v=Qjt_MqEOcGM

"Ray tracing is more than just reflections — it's about all the subtle lighting interactions needed to create a natural, beautiful image. Ray tracing adds these subtle lighting effects throughout the scene, making everything look more real and natural," said Nick Penwarden, Director of Engineering for Unreal Engine at Epic Games. The NVIDIA team states in a blog post that Epic Games has been working to integrate RTX-accelerated ray tracing into its popular Unreal Engine 4. In fact, Unreal Engine 4.22 will support the new Microsoft DXR API for real-time ray tracing.

Epic's free online services launch for game developers

Epic Games also announced the launch of free tools and services as part of Epic Online Services, which was announced in December 2018. The SDK is available via the new developer portal for immediate download and use, and it currently supports Windows, Mac, and Linux. As part of the release, the SDK provides support for two free services: game analytics and player ticketing. Game analytics helps developers understand player behavior; it covers DAU (daily active users), MAU (monthly active users), retention, new player counts, game launch counts, online user count, and more. The ticketing system connects players directly with developers and allows them to report bugs or other problems. These two services will continue to evolve along with the rest of Epic Online Services (EOS) to offer the infrastructure and tools developers need to launch, operate, and scale high-quality online games. Epic Games will also be offering additional free services throughout 2019, including player data storage, player reports, leaderboards & stats, player identity, player inventory, matchmaking, etc. "We are committed to developing EOS with features that can be used with any engine, any store and that can support any major platform...these services will allow developers to deliver cross-platform gameplay experiences that enable players to enjoy games no matter what platform they play on", states the Epic Games team.

• Fortnite server suffered a minor outage, Epic Games was quick to address the issue
• Epic Games CEO calls Google "irresponsible" for disclosing the security flaw in Fortnite Android installer
• Fortnite creator Epic Games launches the Epic Games store, where developers get 88% of revenue earned


Now you can play Assassin’s Creed in Chrome thanks to Google’s new game streaming service

Natasha Mathur
03 Oct 2018
2 min read
Google announced a new experimental game streaming service, Project Stream, earlier this week. Google calls the project a "technical test" and has partnered with Ubisoft, one of the most popular video game publishers, to stream its upcoming Assassin's Creed Odyssey via Project Stream on Chrome.

"We've been working on Project Stream, a technical test to solve some of the biggest challenges of streaming. For this test, we're going to push the limits with one of the most demanding applications for streaming—a blockbuster video game," writes Catherine Hsiao in the announcement blog post.

Google points out that its major goal with Project Stream is to effectively stream AAA game titles, because the Google team is inspired by the technology behind AAA video games. Additionally, working with a AAA game title is more challenging than working with a game that has less intense graphics. "Every pixel is powered by an array of real-time rendering technology, artistry, visual effects, animation, simulation, physics, and dynamics. We're inspired by the game creators who spend years crafting these amazing worlds, adventures, and experiences, and we're building technology that we hope will support and empower that creativity," states the post.

With Project Stream, Google is working to ensure that latency stays minimal and that the graphics of the game are not compromised when using its streaming service. "The idea of streaming such graphically-rich content that requires near-instant interaction between the game controller and the graphics on the screen poses a number of challenges. When streaming TV or movies, consumers are comfortable with a few seconds of buffering at the start, but streaming high-quality games requires latency measured in milliseconds, with no graphics degradation," adds Google.

Google has made limited spaces available for users to try Project Stream, starting October 5. If you want to participate, you can apply on Project Stream's official website. Participation is only open to U.S. residents who are 17 years or older. For more information, check out the official announcement.

• Google Project Zero discovers a cache invalidation bug in Linux memory management, Ubuntu and Debian remain vulnerable
• Google announces new Artificial Intelligence features for Google Search on its 20th birthday
• Google announces the Beta version of Cloud Source Repositories


CES 2019: Top announcements made so far

Sugandha Lahoti
07 Jan 2019
3 min read
CES 2019, the annual consumer electronics show in Las Vegas, runs from Tuesday, Jan. 8 through Friday, Jan. 11. However, the conference unofficially kicked off on Sunday, January 6, followed by press conferences on Monday, Jan. 7. Over the span of these two days, a lot of companies showcased their latest projects and announced new products, software, and services. Let us look at the key announcements made by prominent tech companies so far.

Nvidia
Nvidia CEO Jensen Huang unveiled some "amazing new technology innovations." First, the company announced that over 40 new laptop models in 100-plus configurations will be powered by NVIDIA GeForce RTX GPUs. Turing-based laptops will be available across the GeForce RTX family, from RTX 2080 through RTX 2060 GPUs, said Huang. Seventeen of the new models will feature Max-Q design. Laptops with the latest GeForce RTX GPUs will also be equipped with WhisperMode, NVIDIA Battery Boost, and NVIDIA G-SYNC. GeForce RTX-powered laptops will be available starting Jan. 29 from the world's top OEMs. Nvidia also announced the first 65-inch 4K HDR gaming display, which will arrive in February for $4,999.

LG
LG Electronics, which held a major press event today, has already confirmed a variety of new products. These include LG's 2019 TVs with Alexa and Google Assistant support, 8K OLED, full HDMI 2.1 support, and more. Also on the list are the LG CineBeam Laser 4K projector with voice control, new sound bars with Dolby Atmos and Google Assistant, the LG Gram 17, and a new 14-inch 2-in-1.

Samsung
Samsung announced that its Smart TVs will soon be equipped with iTunes Movies & TV Shows and will support AirPlay 2 beginning Spring 2019. AirPlay 2 support will be available on Samsung Smart TVs in 190 countries worldwide. Samsung is also launching a new Notebook Odyssey to take PC gaming more seriously, posing a threat to competitors Razer and Alienware.

HP
HP announced the HP Chromebook 14 at CES 2019. It is the world's first AMD-powered Chromebook, running on either an AMD A4 or A6 processor with integrated Radeon R4 or R5 graphics. It has 4GB of memory, 32GB of storage, and support for Android apps from the Google Play Store. These models will start shipping in January, starting at $269.

More announcements:
• Asus launched a new 17-inch, 10-pound, Surface Pro-like gaming laptop, the Asus ROG Mothership. It has also announced the Zephyrus S GX701, the smallest and lightest 17-inch gaming laptop yet.
• Corsair's impressive compact gaming desktops come with Core i9 chips and GeForce RTX graphics.
• L'Oréal's newest prototype detects wearers' skin pH levels.
• Acer's new Swift 7 will kill the bezel when it launches in May for $1,699. It is one of the thinnest and lightest laptops ever made.
• Audeze's motion-aware headphones will soon recreate your head gestures in-game.
• Whirlpool is launching a Wear OS app for its connected appliances with simplified voice commands for both Google Assistant and Alexa devices.
• Vuzix starts selling its AR smart glasses for $1,000.
• Pico Interactive revealed the Pico G2 4K, an all-in-one 4K VR headset based on China's best-selling VR unit, the Pico G2. It's incredibly lightweight, powerful, and highly customizable for enterprise purposes. Features include kiosk mode, hands-free controls, and hygienic design.

You can have a look at all the products that will be showcased at CES 2019.

• NVIDIA launches GeForce Now's (GFN) 'recommended router' program to enhance the overall performance and experience of GFN
• NVIDIA open sources its game physics simulation engine, PhysX, and unveils PhysX SDK 4.0
• Uses of Machine Learning in Gaming


NVIDIA open sources its game physics simulation engine, PhysX, and unveils PhysX SDK 4.0

Natasha Mathur
04 Dec 2018
2 min read
The NVIDIA team unveiled PhysX SDK 4.0 yesterday and also announced that it is making its popular real-time physics simulation engine, PhysX, available as open source under the simple BSD-3 license. “We’re doing this because physics simulation — a long key to immersive games and entertainment — turns out to be more important than we ever thought. PhysX will now be the only free, open-source physics solution that takes advantage of GPU acceleration and can handle large virtual environments”, says the NVIDIA team.

NVIDIA designed PhysX specifically for hardware acceleration using powerful processors with hundreds of processing cores. This design offers a dramatic boost in physics processing power, which in turn takes the gaming experience to a whole new level with richer, more immersive physical environments.

The new PhysX SDK 4.0 is a scalable, open source, multi-platform game physics solution that supports a wide range of devices, from smartphones to high-end multicore CPUs and GPUs. PhysX SDK 4.0 has been upgraded to offer industrial-grade simulation quality at game simulation performance levels. PhysX 4.0 comes with a Temporal Gauss-Seidel Solver (TGS), which adjusts constraints with each iteration depending on the bodies’ relative motion (see the sketch below). Beyond that, overall stability has been improved, and new filtering rules for kinematics and statics are now allowed. Other major features of PhysX SDK 4.0 include effective memory usage management, support for different measurement units and scales, and multiple broad-phase, convex-mesh, triangle-mesh, and primitive-shape collision detection algorithms. PhysX SDK 4.0 will be made available on December 20, 2018.

Public reaction to the news is largely positive. PhysX was already free for commercial use, but now that it is open source, people can work deeply with the physics engine, modifying it to their needs at no cost.

https://twitter.com/puradawid/status/1069614540671909888
https://twitter.com/tauke/status/1069603803463184384

For more information, check out the official NVIDIA blog post.

• NVIDIA open sources its material definition language, MDL SDK
• NVIDIA unveils a new Turing architecture: “The world’s first ray tracing GPU”
• BlazingDB announces BlazingSQL, a GPU SQL Engine for NVIDIA’s open source RAPIDS
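The TGS solver named above belongs to the Gauss-Seidel family of iterative solvers, which sweep through the unknowns (or constraints) one at a time and immediately reuse each updated value within the same pass. The Go sketch below shows plain Gauss-Seidel iteration on a small linear system; it is a generic textbook illustration of that solver family, not PhysX's constraint solver or any part of its API.

```go
package main

import (
	"fmt"
	"math"
)

// gaussSeidel solves A*x = b iteratively. Each unknown is updated in place and
// the new value is used immediately for the remaining rows in the same sweep,
// which is the defining trait of Gauss-Seidel style solvers.
// A should be diagonally dominant for this simple version to converge.
func gaussSeidel(A [][]float64, b []float64, iters int, tol float64) []float64 {
	n := len(b)
	x := make([]float64, n)
	for it := 0; it < iters; it++ {
		maxDelta := 0.0
		for i := 0; i < n; i++ {
			sum := b[i]
			for j := 0; j < n; j++ {
				if j != i {
					sum -= A[i][j] * x[j]
				}
			}
			next := sum / A[i][i]
			if d := math.Abs(next - x[i]); d > maxDelta {
				maxDelta = d
			}
			x[i] = next
		}
		if maxDelta < tol {
			break // converged: the last sweep barely changed anything
		}
	}
	return x
}

func main() {
	A := [][]float64{
		{4, -1, 0},
		{-1, 4, -1},
		{0, -1, 4},
	}
	b := []float64{15, 10, 10}
	x := gaussSeidel(A, b, 100, 1e-9)
	fmt.Printf("x = %.4f\n", x)
}
```

Physics engines apply the same sweep-and-reuse idea to contact and joint constraints rather than to a fixed matrix, but the convergence behavior of the iteration is what the "Gauss-Seidel" name refers to.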

Game publisher Activision Blizzard to begin massive layoffs; 800 employees to be dismissed

Sugandha Lahoti
13 Feb 2019
2 min read
Game publisher Activision Blizzard announced that it will lay off 8% of its staff of 9,600 employees. The announcement was made during the company's fourth-quarter earnings call on Tuesday. The nearly 800 employees to be laid off will mostly come from non-game-development and administrative areas of the company.

Activision Blizzard CEO Bobby Kotick said, "The move is being made in an effort at de-prioritizing initiatives that are not meeting expectations and reducing certain non-development and administrative-related costs across the business." The company is undergoing restructuring because of missed expectations for 2018 and lowered expectations for 2019. It will increase the number of developers working on franchises like Call of Duty and Diablo.

In a note to staff obtained by Kotaku, Blizzard president J. Allen Brack said, "Currently staffing levels on some teams are out of proportion with our current release slate. This means we need to scale down some areas of our organization. I'm sorry to share that we will be parting ways with some of our colleagues in the U.S. today." The letter also promised "a comprehensive severance package", says Kotaku, which will include continued health benefits, career coaching, and job placement assistance, as well as profit-sharing bonuses for the previous year, for those being laid off at Blizzard.

In the official press release on Blizzard's website, Brack said Blizzard is dedicated to bringing its unannounced projects to life and will focus heavily on esports and the Overwatch League, its biggest esports brand.

Twitter users are largely disappointed by the news and sympathetic towards the dismissed employees.

https://twitter.com/jasonschreier/status/1095374774728048640
https://twitter.com/hitstreak/status/1095456359594610689
https://twitter.com/day9tv/status/1095390958584131584

Other organizations are also offering job opportunities to those in need.

https://twitter.com/ScottLowe/status/1094052545297711104
https://twitter.com/MitchyD/status/1094061851804078082

• Instacart changes its "tips stealing" policy after facing workers backlash
• Per the new GDC 2019 report, nearly 50% of game developers think game industry workers should unionize
• Tech Workers Coalition volunteers talk unionization and solidarity in Silicon Valley


Ebiten 1.8, a 2D game library in Go, is here with experimental WebAssembly support and newly added APIs

Natasha Mathur
19 Oct 2018
3 min read
Version 1.8 of Ebiten, a 2D game library written in Go, was released yesterday. Ebiten 1.8 comes with new features such as an experimental WebAssembly port and newly added APIs, along with bug fixes and other updates. Ebiten is a very simple 2D game library in Go that offers 2D graphics (geometry/color matrix transformation, various composition modes, offscreen rendering, fullscreen, text rendering), input, and audio support.

Experimental WebAssembly port
Ebiten 1.8 adds a WebAssembly port, though it is still experimental. Go compiles to a single WebAssembly module that includes the Go runtime for goroutine scheduling, garbage collection, maps, and other Go essentials, resulting in a module of at least 2 MB, or about 500 KB when compressed. WebAssembly is a binary instruction format for a stack-based virtual machine, designed as a compilation target for high-level languages such as C/C++/Rust, which makes it easier to deploy client and server applications on the web.

New APIs added
New APIs have been added in Ebiten 1.8 for polygons, TPS, vsync, the audio package, and the ebitenutil package (see the usage sketch below):
• Polygons: type DrawTrianglesOptions has been added, representing options for rendering triangles on an image, along with type Vertex, which represents a vertex passed to DrawTriangles.
• TPS: func CurrentTPS() float64 returns the current TPS, the number of update-function calls per second. func MaxTPS() int returns the current maximum TPS, and func SetMaxTPS(tps int) sets the maximum TPS (ticks per second).
• Vsync: func IsVsyncEnabled() bool reports whether the game is using the display's vsync, and func SetVsyncEnabled(enabled bool) sets whether it does.
• Package audio: func (c *Context) IsReady() bool reports whether the audio is ready or not.
• Package ebitenutil: func DebugPrintAt(image *ebiten.Image, str string, x, y int) draws the string str on the image at position (x, y).

Bug fixes
The bug causing a multi-monitor issue has been fixed, and issues related to macOS 10.14 Mojave have been fixed.

For more information, check out the official release notes.

• GitHub is bringing back Game Off, its sixth annual game building competition, in November
• Microsoft announces Project xCloud, a new Xbox game streaming service, on the heels of Google's Stream news last week
• Now you can play Assassin's Creed in Chrome thanks to Google's new game streaming service
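Here is a minimal sketch of how the TPS and ebitenutil calls listed above might fit together in an Ebiten 1.x game loop. It only uses functions named in the release notes plus Ebiten 1.x's ebiten.Run entry point; the import paths, window size, and scale are assumptions made for illustration rather than anything prescribed by the release.

```go
package main

import (
	"fmt"

	"github.com/hajimehoshi/ebiten"
	"github.com/hajimehoshi/ebiten/ebitenutil"
)

// update is called once per tick; it overlays the current and maximum TPS
// using the DebugPrintAt helper added in Ebiten 1.8.
func update(screen *ebiten.Image) error {
	msg := fmt.Sprintf("TPS: %0.2f / max %d", ebiten.CurrentTPS(), ebiten.MaxTPS())
	ebitenutil.DebugPrintAt(screen, msg, 8, 8)
	return nil
}

func main() {
	ebiten.SetMaxTPS(60)         // cap the update rate at 60 ticks per second
	ebiten.SetVsyncEnabled(true) // sync rendering to the display's refresh rate
	// Run starts the loop: 320x240 logical pixels, scaled 2x, with a window title.
	if err := ebiten.Run(update, 320, 240, 2, "Ebiten 1.8 TPS demo"); err != nil {
		panic(err)
	}
}
```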


Key Takeaways from the Unity Game Studio Report 2018

Natasha Mathur
31 Aug 2018
3 min read
The Unity team has published the Unity Game Studio Report 2018 to share benchmarking data on established studios with emerging studios. The aim is to show emerging studios how fellow creative studio teams operate and make successful games. The Unity Game Studio Report 2018 is based on a study of the leads of 1,445 small and medium independent creative studios (ranging in size from 2 to 50 employees) from across the globe. This includes studios using Unity as their main game engine as well as studios using other game engines.

https://www.youtube.com/watch?v=4hoO_5qNel0

Let's have a look at a few of the major highlights of the report.

Studios are recent, independent, and compact
As per the 2018 Unity Game Studio report, 91% of the surveyed studios, most of which were established recently, are fully independent, and the majority of them are developing their own IPs.

Studios develop, publish, and promote games on their own
40% of the existing and emerging studios are focused on developing AR/VR, which suggests that these platforms are becoming more established among independent creators. The majority of studios publish their projects themselves. For marketing, the most popular media for these studios are Facebook and Twitter. The report also highlights that 53% of the studios monetize their projects via premium payments, while 36% plan on monetizing with the freemium model.

Studios need a wide range of tools to run
69% of the emerging studios use team collaboration along with cloud storage solutions. Less than 40% of the studios use analytics to analyze player behavior.

Studios run on a lean budget
As mentioned in the report, approximately 60% of the budget for all studios comes from freelancing and self-funding, though a small part of their budget still gets spent on training employees.

The report also highlights the hard work that the majority of independent game studios put in to continue establishing themselves. "Not only do (independent developers) bring their creative vision to life, they do so with ingenuity, flair, and lots of bootstraps, overcoming challenges posed by constrained resources with imagination, moxie, and dedication to their love of creating games", writes Jen MacLean, Executive Director at the International Game Developers Association (IGDA), in the report's foreword.

For more information, check out the complete Unity Game Studio Report 2018.

• Unity switches to WebAssembly as the output format for the Unity WebGL build target
• Implementing the Unity game engine and assets for 2D game development [Tutorial]
• Designing UIs in Unity: What you should know


Meet wideNES: A new tool to let you experience the NES classics again

Natasha Mathur
30 Aug 2018
4 min read
wideNES is a new tool that lets you relive your childhood days, only this time you can record the screen while playing in real time, gradually building up a map of the levels you explore. wideNES is a feature of ANESE, an NES emulator developed by Daniel Prilik. What's great about wideNES is that it syncs the action on screen to the generated map, allowing players to see past the edge of the NES's screen and ahead of the level. This mapping technique is not limited to a handful of titles; it lets wideNES work with a wide range of NES games. Let's look at how wideNES works.

Rendering graphics
Back in the 80s, the NES (Nintendo Entertainment System) used a MOS 6502 CPU together with a powerful graphics coprocessor called the Picture Processing Unit (PPU), and wideNES makes use of the PPU as well. The PPU is an integrated circuit in the NES that generates video signals from graphics data stored in memory, and the chip is known for using very little memory to store graphical data. In the NES, the CPU tells the PPU what has changed each frame using memory-mapped I/O: setting up new sprite positions (great for moving objects: player, enemies, projectiles), new level data, and new viewport offsets. With wideNES running in an emulator, it's easy to track the values written to the PPUSCROLL register (which controls the viewport X/Y offset), i.e., it's easy to measure how much the screen has scrolled between two frames. There is a limitation to this technique, however: you can't get a complete map of a game unless the player manually explores the entire game.

Scrolling past 256
The NES is an 8-bit system, so the PPUSCROLL register accepts only 8-bit values. This limits the maximum scroll offset to just 255px; on scrolling past 255, the PPUSCROLL value wraps back to 0, which explains why Super Mario Bros. would appear to bounce back to the start when Mario moved too far right. wideNES makes scrolling past 256 possible by completely ignoring the PPUCTRL register and simply looking at the PPUSCROLL delta between frames. If PPUSCROLL unexpectedly jumps up to ~256, it indicates that the player character has moved left/up a screen, whereas if PPUSCROLL jumps down to ~0, the player has moved right/down a screen (see the sketch below). This approach does not work for games that have static UI elements such as HUDs, masks, and status bars at the edges of the screen; to solve this, wideNES implements several rules that detect and mask off static screen elements automatically.

Detecting "scenes"
Most NES games are split into many smaller "scenes" with doors or transition screens that move between them. wideNES uses perceptual hashing to detect whenever a scene changes. Perceptual hash functions keep similar inputs "close" to one another in the output space, making them perfect for detecting similar images. Perceptual hashes can get incredibly complex, with some able to detect similar images even after they have been rotated, scaled, stretched, or color shifted, but wideNES doesn't need a complex hash function because each frame is always the exact same size.

Work is still being done on improving the wideNES core and on improving ANESE's wideNES implementation. For now, you can explore the ANESE emulator and take a trip down memory lane!
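The scroll-tracking heuristic described above boils down to: compute the per-frame PPUSCROLL delta, and treat a wrap from near 255 down to near 0 (or the reverse) as the viewport crossing a screen boundary. The Go sketch below captures that idea in a simplified form; it is an illustration of the heuristic as described in the article, not code from wideNES or ANESE, and the function names are invented for the example.

```go
package main

import "fmt"

// trackScroll accumulates a "wide" scroll position from successive 8-bit
// PPUSCROLL reads. Because the register wraps at 256, a large positive or
// negative raw delta is interpreted as crossing a screen boundary.
func trackScroll(prev, curr uint8, wide int) int {
	delta := int(curr) - int(prev)
	switch {
	case delta > 128: // jumped up toward ~255: moved one screen left/up
		delta -= 256
	case delta < -128: // jumped down toward ~0: moved one screen right/down
		delta += 256
	}
	return wide + delta
}

func main() {
	// Simulated per-frame PPUSCROLL X values while the player walks right,
	// including a wrap from 254 back to 2.
	frames := []uint8{240, 245, 250, 254, 2, 8, 14}
	wideX := int(frames[0])
	for i := 1; i < len(frames); i++ {
		wideX = trackScroll(frames[i-1], frames[i], wideX)
		fmt.Printf("raw=%3d  wideX=%d\n", frames[i], wideX)
	}
}
```

In this toy trace the accumulated wideX keeps growing smoothly across the 254-to-2 wrap, which is exactly the behavior needed to stitch adjacent screens into one large map.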
For more information, check out the official wideNES blog post.

• Meet yuzu – an experimental emulator for the Nintendo Switch
• AI for game developers: 7 ways AI can take your game to the next level
• AI for Unity game developers: How to emulate real-world senses in your NPC agent behavior

Microsoft announces Game Stack with Xbox Live integration for Android and iOS

Natasha Mathur
15 Mar 2019
3 min read
Microsoft has good news for game developers. It launched a new initiative called Microsoft Game Stack yesterday, which combines different Microsoft tools and services into a single robust ecosystem to 'empower game developers'. Whether you're a rookie indie developer or an AAA studio, this developer-focused platform is meant to make the game development process much easier. The main goal of Game Stack is to help developers easily find the different tools and services required for game development in one spot. These tools range from Azure, PlayFab, DirectX, and Visual Studio to Xbox Live, App Center, and Havok.

The cloud plays a major role in Game Stack, and it relies on Azure to fill that role. Azure, globally available in 54 regions, will help scale Project xCloud (a service that streams games to PCs, consoles, and mobile devices) to provide an uninterrupted gaming experience for players worldwide. Companies like Rare, Ubisoft, and Wizards of the Coast are already hosting multiplayer game servers and storing their player data on Azure. Azure is also capable of analyzing game telemetry, protecting games from DDoS attacks, and training AI. Moreover, Microsoft Game Stack is device agnostic, which makes it convenient for gamers.

Another major component of Game Stack is PlayFab, a backend service for building and operating new games. PlayFab brings game development services, real-time analytics, and LiveOps capabilities to Game Stack. PlayFab is also device agnostic: it supports iOS, Android, PC, Web, Xbox, Sony PlayStation, and Nintendo Switch, as well as all the major game engines, such as Unity and Unreal. Microsoft has also released previews of five new PlayFab services. Of these, PlayFab Matchmaking is open for public preview, while the other four, PlayFab Party, PlayFab Insights, PlayFab PubSub, and PlayFab User Generated Content, are in private preview.

Game Stack also comes with Xbox Live, one of the most engaging and interactive gaming communities in the world. Xbox Live will provide identity and community services in Game Stack. Microsoft has also expanded the cross-platform capabilities of Xbox Live under Game Stack with a new SDK for iOS and Android devices, so mobile developers will be able to easily connect with some of the most highly engaged and passionate gamers on the planet. Other benefits of the Xbox Live SDK include more focus on building games and leveraging Microsoft's trusted identity network for log-in, privacy, online safety, and child accounts. On top of that, features like gamerscore and "hero" stats help keep gamers engaged. Components such as Mixer, DirectX, Azure App Center, Visual Studio, Visual Studio Code, and Havok are also all part of Game Stack.

For more information, check out the official Microsoft Game Stack blog post.

• Microsoft open sources the Windows Calculator code on GitHub
• Microsoft open sources 'Accessibility Insights for Web', a Chrome extension to help web developers fix their accessibility issues
• Microsoft researchers introduce a new climate forecasting model and a public dataset to train these models


Improbable says Unity blocked SpatialOS; Unity responds saying it has shut down Improbable and not SpatialOS

Sugandha Lahoti
11 Jan 2019
4 min read
A fresh drama has emerged between Unity and Improbable. According to yesterday's blog post by the SpatialOS creator, Improbable says that Unity has blocked SpatialOS based on a recent change in Unity's terms of service (clause 2.4). Unity has contested this, stating that Improbable's blog post was misleading, and added that it has terminated its relationship with Improbable without affecting anyone using SpatialOS.

What did Improbable say?
Unity updated its terms of service on Dec 5 and then informed Improbable directly on Jan 9 that its license had been revoked on Unity's game engine. Per the blog, "all existing SpatialOS games using Unity, including production games and in development games of all developers, are now in breach of Unity's license terms." The blog also states that Unity has barred Improbable from continuing to work with the Unity engine, affecting its ability to support games. The blog post criticized Unity's decision, stating that Unity's actions have done harm to projects across the industry, especially affecting vulnerable or small-scale developers. Moreover, this is a threat to games that were funded on the promise of SpatialOS delivering next-generation multiplayer, with Unity as their choice of game engine. The Improbable team also stated that going forward it will help developers using SpatialOS with Unity to finish, release, and operate their games, and will set up an emergency fund. It is also fully open-sourcing the SpatialOS Game Development Kit for Unity under the MIT license.

How did Unity respond?
Unity has called Improbable's blog post 'incorrect', stating that it has "terminated their relationship with Improbable due to a failed negotiation with them after they violated Unity's Terms of Service. However, anyone using SpatialOS will not be affected." Unity also assures that even if a game developer runs a Unity-based game server on their own servers or generic cloud instances (like GCP, AWS, or Azure), they are covered by Unity's EULA: "From a technical standpoint, this is what our clarification on our TOS means: if you want to run your Unity-based game-server, on your own servers, or a cloud provider that provides you instances to run your own server for your game, you are covered by our EULA. We will support you as long as the server is running on a Unity supported platform."

Unity blocked Improbable because the company was making unauthorized and improper use of Unity's technology and name in connection with the development, sale, and marketing of its own products. Early last year, Unity informed Improbable in person that it was in violation of Unity's Terms of Service. Then, after six months, Unity informed Improbable about the violation in writing. Seeing no changes, Unity decided to take strict action by turning off Improbable's Unity Editor license keys about two weeks ago. Unity says it is trying to resolve the dispute with Improbable without affecting developers. SpatialOS developers will receive support for any outstanding questions or issues directly at [email protected].

What about Unity's TOS Clause 2.4?
Unity's updated clause states that it prohibits "streaming or broadcasting so that any portion of the Unity Software is primarily executed on or simulated by the cloud or a remote server and transmitted over the Internet or other networks to end user devices..." This is alarming for Unity asset and service providers and developers. As explained by a gamedev.net user, this could mean that "any kind of processing offload for entity state occurring on a server or cloud provider (such as SpatialOS) is no longer allowed. As such, developers who planned to use Unity in any kind of distributed network capacity may find themselves in a difficult situation."

Epic Games founder Tim Sweeney has reacted harshly to this clause as well: "We specifically make the UE4 EULA apply perpetually so that when you obtain a version under a given EULA, you can stay on that version and operate under that EULA forever if you choose."

https://twitter.com/TimSweeneyEpic/status/1083407460252217346

This battle has definitely added a boost to Unreal Engine's popularity.

https://twitter.com/patrickol/status/1083476747700576256
https://twitter.com/hippowombat/status/1083581963422691329

Epic Games has also said that it has partnered with Improbable to establish a $25 million fund to "assist developers who are left in limbo by the new engine and service incompatibilities that were introduced."

• Unity and Baidu collaborate for simulating the development of autonomous vehicles
• Unity 2018.3 is here with improved Prefab workflows, Visual Effect Graph and more
• Unity ML-Agents Toolkit v0.6 gets two updates: improved usability of Brains and workflow for Imitation Learning


Electronic Arts (EA) announces Project Atlas, a futuristic cloud-based, AI-powered game development platform

Natasha Mathur
02 Nov 2018
4 min read
Electronic Arts (EA) announced Project Atlas, a new AI-powered, cloud-computing-based game development platform, earlier this week. Project Atlas comes with high-quality LIDAR data, improved scalability, a cloud-based engine, and enhanced security, among other features. The general availability date for Project Atlas hasn't been disclosed yet. "We're calling this Project Atlas and we believe in it so much that we have over 1,000 EA employees working on building it every day, and dozens of studios around the world contributing their innovations, driving priorities, and already using many of the components", mentioned Ken Moss, Chief Technology Officer at Electronic Arts. Let's discuss the features of Project Atlas.

High-quality LIDAR data
Project Atlas will use high-quality LIDAR data about real mountain ranges. This data is then passed through a deep neural network trained to create terrain-building algorithms. With the help of this AI-assisted terrain generation, designers will be able to generate not just a single mountain but a series of mountains, along with the surrounding environment, bringing in the realism of the real world. "This is just one example of dozens or even hundreds where we can apply advanced technology to help game teams of all sizes scale to build bigger and more fun games," says Moss.

Improved scalability
Earlier, all simulation or rendering of in-game actions was limited either to the processing performance of the player's console or to a single server interacting with your system. Now, with the help of the cloud, players will be able to tap into a network of many servers dedicated to computing complex tasks. This will deliver hyper-realistic destruction within new HD games that is nearly indistinguishable from real life. "We're working to deploy that level of gaming immersion on every device", says Moss. Moreover, the integration of distributed networks at the rendering level means infinite scalability from the cloud. So, whether you're on a team of 500 or just 5, you'll be able to scale games and create immersive experiences in unprecedented ways.

Cloud-based engine and moddable asset database
With Project Atlas, you can turn your own vision into reality and share the creation with your friends as well as the whole world. You can also market your ideas and visions to the community. Keeping this in mind, the Project Atlas team is planning a cloud-enabled engine that can seamlessly integrate different services. Along with a moddable asset database, there will also be a common marketplace where users can share and rate other players' creations. "Players and developers want to create. We want to help them. By blurring the line between content producers and players, this will truly democratize the game experience", adds Moss.

Enhanced security
Project Atlas comes with a unified platform where game makers can seamlessly deploy security measures such as SSL certificates, configuration, appropriate encryption of data, and zero-downtime patches for every feature from a single secure source. This lets them focus more on creating games and less on taking the required security measures. "We're solving for some of the manually intensive demands by bringing together AI capabilities in an engine and cloud-enabled services at scale. With an integrated platform that delivers consistency and seamless delivery from the game, game makers will free up time, brainspace, and energy for the creative pursuit", says Moss.

For more information, check out the official Project Atlas blog.

• Xenko 3.0 game engine is here, now free and open-source
• Meet yuzu – an experimental emulator for the Nintendo Switch
• AI for Unity game developers: How to emulate real-world senses in your NPC agent behavior

US labor organization AFL-CIO writes an open letter to game developers, urging them to unionize for fair treatment at work

Natasha Mathur
18 Feb 2019
3 min read
The American Federation of Labor and Congress of Industrial Organizations (AFL-CIO), the largest labor organization in the United States, published an open letter on Kotaku, a video game website and blog, last week. The letter urges video game industry workers to unionize and voice their support for better treatment in the workplace. The letter is from secretary-treasurer Liz Shuler, and this is the first time the AFL-CIO has made a public statement about unionizing game developers.

In the letter, Shuler talks about the struggles of game developers and the unfair treatment they endure in terms of working conditions, job instability, and inadequate pay. She notes that although U.S. video game sales reached $43 billion in 2018 (3.6 times larger than the film industry's record-breaking box office), a "stunning accomplishment" for game developers, they are still not getting the respect they deserve. "You've built new worlds, designed new challenges and ushered in a new era of entertainment. Now it's time for industry bosses to start treating you with hard-earned dignity and respect", writes Shuler.

She mentions that game developers often work outrageously long hours in stressful and toxic working conditions, unable to ask for better out of fear of losing their jobs. She gives the example of developers at Rockstar Games, who shared their experiences of "crunch time" (when the pressure to succeed is extreme) lasting months and sometimes even years to meet unrealistic demands from management and deliver a game that made their bosses $725 million in its first three days. "They get rich. They get notoriety. They get to be crowned visionaries and regarded as pioneers. What do you get?", writes Shuler.

According to Shuler, this is a moment for change, and change will come when developers come together as a strong union, using their "collective voice" to ask for a "fair share of the wealth" that game developers create every day. She writes that CEOs and bosses will treat developers right only when they stand together and demand it. "You have the power to demand a stake in your industry and a say in your economic future. Whether we're mainlining caffeine in Santa Monica, clearing tables in Chicago or mining coal in West Virginia, we deserve to collect nothing less than the full value of our work", states Shuler.

Public reaction to the news is mostly positive, with some people calling for a better and stronger alternative to unions:

https://twitter.com/kwertzy/status/1096471380357349376
https://twitter.com/getglitched/status/1096499209719685120
https://twitter.com/moesidegaming/status/1096666233011871744
https://twitter.com/legend500/status/1096571646805188608
https://twitter.com/turnageb/status/1096481116763107328

Check out the complete letter here.

• Open letter from Mozilla Foundation and other companies to Facebook urging transparency in political ads
• Google TVCs write an open letter to Google's CEO; demand equal benefits and treatment
• The cruelty of algorithms: Heartbreaking open letter criticizes tech companies for showing baby ads after stillbirth


Unity 2018.3 is here with improved Prefab workflows, Visual Effect graph and more

Sugandha Lahoti
14 Dec 2018
3 min read
Yesterday, the team at Unity released its next update for 2018. Unity 2018.3 comes with improved Prefab workflows, the Visual Effect Graph (Preview), and an updated Terrain System, along with more than 2,000 new features, fixes, and improvements.

Improved Prefab workflows
Prefab workflows have been improved in Unity 2018.3 with a focus on reusability, control, and safety. These updates add support for nesting and make it safer and more efficient for teams of all sizes to work with Prefabs. Nested Prefabs make it easier for teams of all sizes to split up Prefabs into multiple entities for greater efficiency, reuse any content from small to large, and work on different parts of content simultaneously.

Visual Effect Graph (Preview)
The Visual Effect Graph makes it easy for artists to create stand-out VFX for games and other real-time projects. Developers can create both simple and complex effects with this tool. It also includes an API for creating custom nodes to meet the needs of advanced creators.

Updated Terrain System
The updated terrain tools give developers better performance and improved usability. Operations are shifted over to the GPU, which gives creators faster tools, larger brush sizes, improved previews, and the ability to paint Terrain tile borders with automatic seam-stitching. Improvements have also been made to support the High Definition Render Pipeline (HDRP) and the Lightweight Render Pipeline (LWRP).

FPS Sample Project
Unity 2018.3 comes with the FPS Sample, which gives game developers source-code access to a connected multiplayer FPS experience. They can download the sample and use it as a starting point to learn the latest technologies, such as HDRP, or for their next connected game.

Unity 2018.3 also makes significant improvements to how Timeline Animation Tracks handle animations on the root transform of a hierarchy, including new Track Offset modes, adaptation to scale, an improved editor preview, and the deprecation of Root Motion. Mobile improvements include Dynamic Resolution Scaling support for Vulkan and Metal, Android App Bundle generation support, and faster APK package build times on Android with APKzlib.

Plans for 2019
For 2019, Unity is planning further announcements and innovations. These include a new MegaCity demo to showcase Unity's approach to Data-Oriented Design. Another demo, Cinecast (Experimental), is an AI cinematography system that enables the creation of movie-like cinematic sequences from gameplay, in real time. Project MARS is an extension for Unity that helps developers build applications that intelligently interact with any real-world environment, with little-to-no custom coding. 2019 will also see the preview of Project Tiny, Unity's new, highly modular runtime and Editor mode that enables the creation of small, light, and fast instant games and experiences.

Find the full list of Unity 2018.3 features on the Unity blog.

• Unity introduces guiding Principles for ethical AI to promote responsible use of AI
• Unity has won the Technology and Engineering Emmy Award for excellence in engineering creativity
• What you should know about Unity 2018 Interface