
Tech News


GraphQL API is now generally available

Amrata Joshi
17 Jul 2019
3 min read
Last month, the team at Fauna, provider of the cloud-first database FaunaDB, announced the general availability of its GraphQL API, a query language for APIs. With GraphQL support, FaunaDB now lets developers use the API of their choice to manipulate their data, making it, the company claims, the only serverless backend with support for universal database access. GraphQL also boosts developer productivity by enabling fast, easy development of serverless applications.

Matt Biilmann, CEO at Netlify, a Fauna partner, said, “Fauna’s GraphQL support is being introduced at a perfect time as rich, serverless apps are disrupting traditional development models.” Biilmann added, “GraphQL is becoming increasingly important to the entire developer community as they continue to leverage JAMstack and serverless to simplify cloud application development. We applaud Fauna’s work as the first company to bring a serverless GraphQL database to market.”

GraphQL lets developers specify the shape of the data they need without requiring changes to the backend components that provide it. The GraphQL API in FaunaDB helps teams collaborate smoothly: back-end teams can focus on security and business logic while front-end teams concentrate on presentation and usability.

The global serverless architecture market was valued at $3.46 billion in 2017 and is expected to reach $18.04 billion by 2024, as per Zion Research. GraphQL is part of this growth in serverless development, so developers can look for back-end GraphQL support like that found in FaunaDB. GraphQL defines three general operation types: Queries, Mutations, and Subscriptions; currently, FaunaDB natively supports Queries and Mutations.

FaunaDB's GraphQL API provides developers with uniform access to transactional consistency, quality of service (QoS), user authorization, data access, and temporal storage. Key capabilities include:

No limits on data history: FaunaDB is the only database that provides support without any limits on data history. Any API in FaunaDB, such as SQL, can return data as of any given time.
Consistency: FaunaDB provides the highest consistency levels for its transactions, applied automatically to all APIs.
Authorization: FaunaDB provides access control at the row level, applicable to all APIs, be it GraphQL or SQL.
Shared data access: Data written by one API (e.g., GraphQL) can be read and modified by another API, such as FQL.

To know more about the news, check out the press release.

7 reasons to choose GraphQL APIs over REST for building your APIs
Best practices for RESTful web services: Naming conventions and API versioning [Tutorial]
Implementing routing with React Router and GraphQL [Tutorial]
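GraphQL queries like the ones described above travel as plain JSON over HTTP. A minimal sketch of what a client-side call might look like, assuming a hypothetical user-defined Todo type (the type, fields, and query are illustrative, not Fauna's actual schema):

```python
import json

def build_graphql_payload(query, variables=None):
    """A GraphQL request body is just JSON: a query string plus optional variables."""
    return json.dumps({"query": query, "variables": variables or {}})

# The client asks for exactly the fields it needs -- no more, no less.
QUERY = """
query FindTodos($limit: Int) {
  allTodos(_size: $limit) {
    data { title completed }
  }
}
"""

payload = build_graphql_payload(QUERY, {"limit": 5})
# This payload would then be POSTed to the GraphQL endpoint with an auth
# header, e.g. via requests.post(endpoint_url, data=payload, headers=...).
```

The same endpoint handles both queries and mutations; only the operation keyword in the query string changes.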


DuckDuckGo upgrades its Apple Maps integration for an enhanced map search

Bhagyashree R
17 Jul 2019
3 min read
Earlier this year, DuckDuckGo announced that its map and address-related searches are now powered by Apple's MapKit JS framework. This enabled it to offer improved address searches, additional visual features, enhanced satellite imagery, and better privacy. Since then, the company has further expanded its use of Apple Maps for enhanced search while adhering to its commitment to user privacy, according to a blog post shared yesterday.

https://twitter.com/DuckDuckGo/status/1151166280088657921

Here are some of the map-related search enhancements in DuckDuckGo:

Map re-querying: Previously, every new map-related search redirected you to the regular DuckDuckGo search page. Now you can stay in the expanded map view and refine local searches instantly. Additionally, when you move around the map or zoom in and out, the search results are updated to include places within the field of view.

Intelligent autocompletion: To make searching easier and faster, the search engine now provides intelligent autocompletion within the expanded map view. As you type or update a search query, DuckDuckGo dynamically shows search suggestions tailored to the local region displayed.

A dedicated Maps tab: Similar to Google, you will now see a dedicated Maps tab at the top of every search results page. Previously, the Maps tab was shown only for map-related searches; from now on it appears consistently alongside Images, Videos, and News. So, if you search for “cupcakes” and go to the Maps tab, you will see local cupcake places.

“Privacy by design, without any tradeoffs”: Along with these enhancements, DuckDuckGo also promises stricter user privacy. “A lot has changed with using maps on DuckDuckGo making it an even smoother experience, but what hasn’t changed is the way we handle your data—or rather, the way we don’t do anything with your data. We are making local searches faster while retaining the privacy you expect,” the post reads. It further emphasized that the company does not share any personally identifiable information such as IP addresses, and makes it a point to discard any such information immediately after use.

It is great that DuckDuckGo is expanding its use of Apple Maps while promising better privacy to its users. Many users appreciated this update and believe it is the right step towards becoming “a worthy competitor to Google Search.” Others said that Apple Maps is way behind Google Maps, and some have found that Apple Maps’ quality depends heavily on the user's location. One user shared his experience: “I’ve used Apple Maps as my primary map since it came out, and I’ve only gotten a wrong location one time in literally thousands of searches, and that was years ago. It wasn’t really ready when it launched, but it has gotten consistently better over time.” He further added, “The UX is great, in many cases, the satellite imagery is more up-to-date compared to Google, and it doesn’t maul my battery to use. Not saying it’s clearly better than Google, because it isn’t, but for my usage, it’s more than ‘good enough,’ and I love to see Apple’s privacy-respecting products compete effectively with big G.”

Read the full announcement on DuckDuckGo’s official website.

Time for data privacy: DuckDuckGo CEO Gabe Weinberg in an interview with Kara Swisher
DuckDuckGo proposes “Do-Not-Track Act of 2019” to require sites to respect DNT browser setting
DuckDuckGo now uses Apple MapKit JS for its map and location-based searches
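The map re-querying behavior, refreshing results to only the places inside the visible map area, can be modeled as a simple bounding-box filter. The place names and coordinates below are made up purely for illustration:

```python
# Hypothetical place data; real results would come from the MapKit search API.
PLACES = [
    {"name": "Cupcake Corner", "lat": 40.741, "lon": -73.989},
    {"name": "Frosted Bakery", "lat": 40.758, "lon": -73.985},
    {"name": "Brooklyn Sweets", "lat": 40.678, "lon": -73.944},
]

def places_in_view(places, south, west, north, east):
    """Return only the places whose coordinates fall inside the viewport box."""
    return [p for p in places
            if south <= p["lat"] <= north and west <= p["lon"] <= east]

# Panning or zooming the map changes the box; the results update with it.
midtown = places_in_view(PLACES, 40.74, -74.00, 40.77, -73.97)
```

Each pan or zoom simply re-runs the query with the new box instead of reloading the whole search page.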


Cloudflare RCA: Major outage was a lot more than “a regular expression went bad”

Savia Lobo
16 Jul 2019
3 min read
On July 2, 2019, Cloudflare suffered a major outage due to a massive spike in CPU utilization across its network. Ten days later, on July 12, Cloudflare’s CTO John Graham-Cumming released a report detailing how the Cloudflare service went down for 27 minutes.

During the outage, the company speculated that the cause was a single misconfigured rule within the Cloudflare Web Application Firewall (WAF), deployed during a routine deployment of new Cloudflare WAF Managed Rules. This speculation turned out to be true: the rule caused CPU exhaustion on every CPU core that handles HTTP/HTTPS traffic on the Cloudflare network worldwide. Graham-Cumming said they are “constantly improving WAF Managed Rules to respond to new vulnerabilities and threats”. The CPU exhaustion was caused by a single WAF rule containing a poorly written regular expression that ended up creating excessive backtracking; the report reproduces the regular expression that was at the heart of the outage.

Graham-Cumming says Cloudflare deploys dozens of new rules to the WAF every week and has numerous systems in place to prevent any negative impact from those deployments. He shared a list of the vulnerabilities in that process which allowed the outage to happen.

What’s Cloudflare doing to mend the situation? Graham-Cumming said they have stopped all release work on the WAF completely and are following a set of remediation processes. For the longer term, Cloudflare is “moving away from the Lua WAF that I wrote years ago”. The company plans to port the WAF to its new firewall engine, which gives customers the ability to control requests in a flexible and intuitive way, inspired by the widely known Wireshark language. This will make the WAF faster and add yet another layer of protection.

Users have appreciated Cloudflare’s immediate response to the outage and its complete transparency about the root cause in a full post-mortem report.
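The postmortem pins the blow-up on nested wildcards of the form `.*.*=.*`. As a simplified model (a step-counting sketch, not Cloudflare's actual regex engine), counting the splits a naive backtracking matcher tries before rejecting input that contains no `=` shows how the cost grows:

```python
def backtracking_attempts(n):
    """Count the splits a naive backtracking engine tries when matching the
    pattern '.*.*=.*' against n characters that contain no '=' sign.

    The first '.*' greedily takes i characters, the second takes j of the
    remainder, and only then does the literal '=' fail to match, so every
    (i, j) split is attempted before the overall match is rejected.
    """
    attempts = 0
    for i in range(n + 1):          # characters consumed by the first '.*'
        for j in range(n - i + 1):  # characters consumed by the second '.*'
            attempts += 1
    return attempts

# The work grows quadratically: (n + 1)(n + 2) / 2 attempts for n characters.
for n in (10, 100, 1000):
    print(n, backtracking_attempts(n))
```

At Cloudflare's request volume, this superlinear cost on non-matching inputs was enough to exhaust every HTTP-serving core.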
https://twitter.com/fatih/status/1150014793253904386
https://twitter.com/nealmcquaid/status/1150754753825165313
https://twitter.com/_stevejansen/status/1150928689053470720

“We are ashamed of the outage and sorry for the impact on our customers. We believe the changes we’ve made mean such an outage will never recur,” Graham-Cumming writes. Read the complete in-depth report on Cloudflare's blog.

How Verizon and a BGP Optimizer caused a major internet outage affecting Amazon, Facebook, CloudFlare among others
Cloudflare adds Warp, a free VPN to 1.1.1.1 DNS app to improve internet performance and security
Cloudflare raises $150M with Franklin Templeton leading the latest round of funding


Amazon workers protest on Prime Day, demand a safe work environment and fair wages

Fatema Patrawala
16 Jul 2019
6 min read
While people all over the globe splurge in Amazon’s much-awaited Prime Day sale, the company's employees are protesting at multiple sites across the globe, demanding better working conditions, among other things. Workers at the Amazon warehouse in Shakopee, Minnesota, staged a six-hour work stoppage on Prime Day. As per reports from BBC News, 2,000 Amazon workers in Germany went on strike yesterday, and in the UK week-long protests are planned.

Amazon started offering this sale five years ago to its Prime customers, who pay a subscription fee in exchange for deep discounts on a range of products, with free shipping, next-day delivery, and other perks.

Bloomberg reports that William Stolz, one of the Minnesota employees organizing the strike, said, “Amazon is going to be telling one story about itself, which is they can ship a Kindle to your house in one day, isn’t that wonderful! But we want to take the opportunity to talk about what it takes to make that work happen and put pressure on Amazon to protect us and provide safe, reliable jobs.” He says he has to pick an item about every eight seconds, or 332 per hour, for a 10-hour day. "The speeds that we have to work are very physically and mentally exhausting, in some cases leading to injuries," he said. "Basically we just want them to treat us with respect as human beings and not treat us like machines."

Bloomberg also reported that the Minnesota warehouse has become central to Amazon worker activism. There were talks between employees and management to reduce workloads during Ramadan and to designate a conference room as a prayer space, but according to the workers, Amazon has failed to meet these demands, and the company terminates employees who do not meet its productivity quotas.
In a letter last year to the National Labor Relations Board, reported by The Verge, an attorney for Amazon said that hundreds of employees at one Baltimore facility were terminated within about a year for failing to meet productivity rates. In May, the Washington Post published a detailed report on how Amazon has gamified its workers' productivity goals and made them dynamic. Gamification generally refers to software programs that simulate video games by offering rewards, badges, or bragging rights; Amazon warehouse workers must complete various tasks to earn these reward points.

While the protest was planned by Amazon warehouse workers, a few white-collar Amazon engineers flew to Minnesota to join it in a show of solidarity. They are demanding the company take action against climate change, ease quotas, and make more temp employees permanent.

https://twitter.com/histoftech/status/1148348541678604288

“We’re both fighting for a livable future,” said a Seattle software engineer. It’s the latest example of tech employees with very different jobs trying to forge common cause in the hope that their bosses find their demands harder to ignore. In May, Amazon shareholders rejected 11 resolutions put forward by employees, covering Amazon’s controversial facial recognition technology, demands for more action on climate change, salary transparency, and other equity issues.

Tyler Hamilton, who works at the Shakopee warehouse, said he hoped consumers would remember that there are people behind the packages that show up at their doors, often less than 48 hours after an order is placed. "We are the faces behind the boxes," Hamilton said. "The little smiley face that comes with every package, not everyone in there smiles all the time. It can be rough sometimes. And, you should think about that when you order it."

In Germany, Amazon employs 20,000 people.
Labour union Verdi said more than 2,000 workers at seven sites had gone on strike under the slogan "no more discount on our incomes". "While Amazon fuels bargain hunting on Prime Day with hefty discounts, employees are being deprived of a living wage," said Orhan Akman, retail specialist at Verdi.

In the UK, GMB union officials handed leaflets to workers arriving at the site in Peterborough in the East Midlands, and in the coming days protests are expected at other sites such as Swansea and Rugeley, in the West Midlands. Mick Rix, GMB national officer, said, "Amazon workers want Jeff Bezos to know they are people, not robots. It's prime time for Amazon to get round the table with GMB and discuss ways to make workplaces safer and to give their workers an independent voice."

https://twitter.com/GMB_union/status/1148658030487232515

In response, Amazon said it "provided great employment opportunities with excellent pay" and encouraged people to compare its operations in Shakopee with other employers in the area. In the UK, where it employs 29,500 people, a spokesperson said the company offered industry-leading pay starting at £9.50 per hour and was the "employer of choice for thousands of people across the UK". It said its German operations offered wages "at the upper end of what is paid in comparable jobs" and that it was "seeing very limited participation across Germany with zero operational impact and therefore no impact on customer deliveries".

The planned strike has caught the attention of politicians. Democratic presidential candidates Elizabeth Warren and Bernie Sanders both offered public support for the strike on social media. "I fully support Amazon workers' Prime Day strike," Warren said in a tweet. "Their fight for safe and reliable jobs is another reminder that we must come together to hold big corporations accountable."
https://twitter.com/ewarren/status/1150760629583712257

"I stand in solidarity with the courageous Amazon workers engaging in a work stoppage against unconscionable working conditions in their warehouses," Sanders tweeted. "It is not too much to ask that a company owned by the wealthiest person in the world treat its workers with dignity and respect."

Amazon S3 is retiring support for path-style API requests; sparks censorship fears
Amazon employees get support from Glass Lewis and ISS on its resolution for Amazon to manage climate change risks
Amazon rejects all 11 shareholder proposals including the employee-led climate resolution at Annual shareholder meeting


LLVM's Arm stack protection feature turns ineffective when the stack protector slot is re-allocated

Vincy Davis
16 Jul 2019
2 min read
A vulnerability in the stack protection feature of LLVM's Arm backend makes the protection ineffective when the stack protector slot is re-allocated. It was disclosed in a vulnerability note from the CERT Coordination Center at the Software Engineering Institute.

The stack protection feature can optionally be used to protect against buffer overflows in the LLVM Arm backend. To make the feature work, a cookie value is placed between the local variables and the stack frame's return address. After this value is stored in memory, the compiled code checks the cookie before returning to verify that it has not been changed or overwritten; if the value has changed, execution is terminated.

If the LocalStackSlotAllocation pass later re-allocates the stack protector slot, the protection becomes ineffective: the new slot is placed after the local variables it is supposed to protect. It is also possible for the value to be overwritten via the stack cookie pointer.

Once the stack protection is rendered ineffective, the function is vulnerable to a stack-based buffer overflow. An overflow can then change the return address, or overwrite the cookie itself so that an unintended value passes the check. The proposed solution is to apply the latest updates from both LLVM and Arm.

This year has seen many buffer overflow vulnerabilities. In the June release of VLC 3.0.7, many security issues were resolved; one of the high-severity issues resolved was a stack buffer overflow in the RIST module of VLC 4.0.

LLVM WebAssembly backend will soon become Emscripten’s default backend, V8 announces
Google proposes a libc in LLVM, Rich Felker of musl libc thinks it’s a very bad idea
Introducing InNative, an AOT compiler that runs WebAssembly using LLVM outside the Sandbox at 95% native speed
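The canary mechanism described above can be sketched with a toy stack model. This is purely illustrative: a real canary is laid out by the compiler inside the machine stack frame, and real canary values are randomized (fixed here for reproducibility):

```python
CANARY = 0xDEADBEEF  # a real stack protector would use a random value

def make_frame(buf_size, return_addr):
    # Layout, low to high index: [buffer ...][canary][return address]
    return [0] * buf_size + [CANARY, return_addr]

def write_buffer(frame, data):
    """Copy data into the buffer with no bounds check (the classic bug)."""
    for i, value in enumerate(data):
        frame[i] = value

def check_and_return(frame, buf_size):
    """Function epilogue: verify the canary before trusting the return address."""
    if frame[buf_size] != CANARY:
        raise RuntimeError("stack smashing detected")
    return frame[buf_size + 1]

frame = make_frame(8, return_addr=0x4000)
write_buffer(frame, [0x41] * 8)   # in-bounds write: canary stays intact
assert check_and_return(frame, 8) == 0x4000

write_buffer(frame, [0x41] * 10)  # overflow: clobbers canary and return address
# check_and_return(frame, 8) now raises RuntimeError. The LLVM bug defeats
# exactly this check: if the canary slot is re-allocated after the locals,
# an overflow can reach the return address without touching the canary.
```

The vulnerability note's point is that the check only works when the canary physically sits between the buffer and the return address; re-allocating the slot breaks that invariant.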


Epic Games grants Blender $1.2 million in cash to improve the quality of their software development projects

Vincy Davis
16 Jul 2019
4 min read
Yesterday, Epic Games announced that it is awarding the Blender Foundation $1.2 million in cash, spread over three years, to accelerate the quality of Blender's software development projects. Blender is a free and open-source 3D creation suite that supports a full range of tools to empower artists to create 3D graphics, animation, special effects, and games.

Ton Roosendaal, founder and chairman of the Blender Foundation, thanked Epic Games in a statement: “Thanks to the grant we will make a significant investment in our project organization to improve on-boarding, coordination and best practices for code quality. As a result, we expect more contributors from the industry to join our projects.”

https://twitter.com/tonroosendaal/status/1150793424536313862

The $1.2 million grant is part of Epic's $100 million MegaGrants program, announced this March. Tim Sweeney, CEO of Epic Games, had announced that Epic would offer $100 million in grants to game developers to boost the growth of the gaming industry by supporting enterprise professionals, media and entertainment creators, students, educators, and tool developers doing excellent work with Unreal Engine or enhancing open-source capabilities for the 3D graphics community. Sweeney believes that open tools, libraries, and platforms are critical to the future of the digital content ecosystem. “Blender is an enduring resource within the artistic community, and we aim to ensure its advancement to the benefit of all creators,” he adds.

This is the biggest award announced by Epic so far. Blender has no obligation to use or promote Epic Games’ storefront or engine; the grant is a purely generous offer with “no strings attached”. In April, Magic Leap revealed that it would provide 500 Magic Leap One Creator Edition spatial computing devices as a giveaway under the Epic MegaGrants program.

Blender users are appreciative of Epic Games' support and generosity.
https://twitter.com/JeannotLandry/status/1150812155412963328
https://twitter.com/DomAnt2/status/1150798726379839488

A Redditor comments, “There's a reason Epic as a company has an extremely positive reputation with people in the industry. They've been doing this kind of thing for years, and a huge amount of money they're making from Fortnite is planned to be turned into grants as well. Say what you want about them, they are without question the top company in gaming when it comes to actually using their profits to immediately reinvest/donate to the gaming industry itself. It doesn't hurt that every company who works with them consistently says that they're possibly the very best company in gaming to work with.”

A comment on Hacker News reads, “Epic are doing a great job improving fairness in the gaming industry, and the economic conditions for developers. I'm looking forward to their Epic Store opening up to more (high quality) Indie games.”

In 2015, Epic launched Unreal Dev Grants, offering a pool of $5 million to independent developers with interesting Unreal Engine 4 projects. In December 2018, Epic also launched the Epic Games store, where developers keep 88% of the revenue they earn.

Epic's large donation holds even more value considering that the highly anticipated release of Blender 2.8 is around the corner. Though its release candidate is already out, users are excited for the stable release, which brings new 3D viewport and UV editor tools to enhance the user experience. With Blender aiming to increase the quality of its projects, such grants from major game publishers will only help it grow.

https://twitter.com/ddiakopoulos/status/1150826388229726209

A user on Hacker News comments, “Awesome. Blender is on the cusp of releasing a major UI overhaul (2.8) that will make it more accessible to newcomers (left-click is now the default!). I'm excited to see it getting some major support from the gaming industry as well as the film industry.”

What to expect in Unreal Engine 4.23?
Epic releases Unreal Engine 4.22, focuses on adding “photorealism in real-time environments”
Blender celebrates its 25th birthday!

EU's satellite navigation system, Galileo, suffers major outage; nears 100 hours of downtime

Savia Lobo
16 Jul 2019
3 min read
Europe’s satellite navigation system, Galileo, has been suffering a major outage since July 11, nearing 100 hours of downtime, due to a “technical incident related to its ground infrastructure”, according to the European GNSS (Global Navigation Satellite System) Agency, or GSA.

Funded by the EU, the Galileo program went live with initial services in December 2016 after 17 years of development. It was launched to end the EU's reliance on the US Air Force's Global Positioning System (GPS), and on the Russian government's GLONASS, for commercial, military, and other applications such as guiding aircraft. The Galileo satellite network is presently used by satnavs, financial institutions, and more. It provides both free and commercial offerings and is widely used by government agencies and private companies for navigation and search-and-rescue operations.

GSA’s service status page shows that 24 of the 26 Galileo satellites are listed as "not usable," while the other two carry the status "testing". The outage means the satellites may not be able to provide timing or positioning data to smartphones or other devices in Europe that use the system. According to the BBC, most affected users will hardly notice the outage, as their devices will be relying instead on data from the American GPS; depending on the sat-nav chip installed, cell phones and other devices might also connect to the Russian (GLONASS) and Chinese (BeiDou) networks.

On July 11, the GSA released an advisory notifying users that Galileo satellite signals “may not be available nor meet the minimum performance levels”, and warned that these systems “should be employed at users’ own risk”. On Saturday, July 13, the GSA issued a sterner warning: Galileo was experiencing a full-service outage and "signals are not to be used."
On July 14, the GSA said the outage affected only the Galileo navigational and satellite-based timing services. However, "the Galileo Search and Rescue (SAR) service -- used for locating and helping people in distress situations for example at sea or mountains -- is unaffected and remains operational." “Experts are working to restore the situation as soon as possible. An Anomaly Review Board has been immediately set up to analyze the exact root cause and to implement recovery actions”, the GSA added.

“Galileo is still in a roll-out, or pilot phase, meaning it would not yet be expected to lead critical applications”, the BBC reports. A GSA spokesperson told BBC News, "People should remember that we are still in the 'initial services' phase; we're not in full operation yet”.

However, according to Inside GNSS, a specialist sat-nav site, the problem may lie with the Precise Timing Facility (PTF), a ground station in Italy that gives each satellite in the system an accurate time reference; “time has an impact on the whole constellation!”, Inside GNSS adds.

According to ZDNet, “The downtime also comes after widespread GPS outages were reported across Israel, Iran, Iraq, and Syria at the end of June. Israeli media blamed the downtime on Russian interference, rather than a technical problem”.

https://twitter.com/planet4589/status/1150638285640912897
https://twitter.com/aallan/status/1150427275231420417
https://twitter.com/LeoBodnar/status/1150338536517881856

To know more about this news in detail, head over to the GSA’s official blog post.

Twitter experienced major outage yesterday due to an internal configuration issue
Stripe’s API suffered two consecutive outages yesterday causing elevated error rates and response times
Why did Slack suffer an outage on Friday?
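Why a single timing facility can take down a whole constellation: GNSS receivers compute position from signal travel times, so a satellite clock error turns into a ranging error at the speed of light. A back-of-the-envelope sketch (illustrative only, not Galileo's actual error budget):

```python
C = 299_792_458  # speed of light, m/s

def range_error_m(clock_error_s):
    """Pseudorange error (metres) introduced by a satellite clock offset."""
    return C * clock_error_s

# Even a microsecond of clock error shifts the measured range by roughly 300 m,
# which is why every satellite depends on a precise ground time reference.
for t in (1e-9, 1e-6, 1e-3):
    print(f"{t:g} s clock error -> {range_error_m(t):,.1f} m")
```

A nanosecond matters (about 30 cm of range), so an outage at the Precise Timing Facility degrades every satellite that relies on it.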


Meredith Whittaker, Google Walkout organizer, and AI ethics researcher is leaving the company, adding to its brain-drain woes over ethical concerns

Sugandha Lahoti
16 Jul 2019
4 min read
Meredith Whittaker, who played a major role in the Google Walkout last year, is leaving the company after facing alleged retaliation at work. The news was disclosed when a software engineer at Google posted a tweet about her last day.

https://twitter.com/thegreenfrog611/status/1150859347766833152

A Google spokeswoman also confirmed Whittaker’s departure to Bloomberg; Whittaker herself has not yet shared the news on her Twitter account.

Last November, the global Google Walkout for Real Change was organized by Claire Stapleton, Meredith Whittaker, and six other employees at the company. It prompted 20,000 Google employees and contractors to walk off the job in opposition to the company’s handling of sexual harassment allegations. In April, Stapleton and Whittaker accused the company of retaliating against them over the Walkout protest: both their roles changed dramatically, including calls to abandon AI ethics work, demotion, and more.

After Google announced it was disbanding its AI ethics council, Whittaker said she was informed that to remain at the company she would have to abandon her work on AI ethics and the AI Now Institute. She said her manager told her in late December that she would likely need to leave Google’s Cloud division, and in March that the “Cloud division was seeking more revenue and that AI Now and her AI ethics work was no longer a fit. This was a strange request because the Cloud unit has a team working on ethical concerns related to AI.”

Stapleton faced similar retaliation: she was told she would be demoted from her role as marketing manager at YouTube. “My manager started ignoring me, my work was given to other people, and I was told to go on medical leave, even though I’m not sick,” she said. Following continuous counter-attacks, Stapleton resigned from her position last month.

https://twitter.com/clairewaves/status/1137002800053985280

Whittaker had then tweeted in her support.
https://twitter.com/mer__edith/status/1137006840313548801

Whittaker had signed the petition protesting Google’s infamous Project Dragonfly, the secretive search engine that Google is allegedly developing to comply with Chinese censorship rules. She was also a leader in the anti-Maven movement. Google’s Project Maven was focused on analyzing drone footage and could eventually have been used to improve drone strikes on the battlefield. More than 3,000 Google employees signed a petition against the project, which led to Google deciding not to renew its contract with the U.S. Department of Defense in 2019; Google announced the decision in June. Whittaker tweeted at the time that she was “incredibly happy about this decision, and have a deep respect for the many people who worked and risked to make it happen. Google should not be in the business of war.”

People have commented on how Whittaker's departure will only intensify activism at Google. “The impact @mer__edith has in AI ethics is second to none. What happens to her at Google will be a gauge for the wellbeing of the entire field. Watch closely,” said Moritz Hardt, Assistant Professor of Electrical Engineering and Computer Science at Berkeley.

https://twitter.com/mrtz/status/1121110692843507712
https://twitter.com/Kantrowitz/status/1150992543691108352

Liz Fong-Jones, a Xoogler who left Google over ethical concerns earlier this year, tweeted about the number of Google Walkout and other organizing leaders who have left the company. Five have left: Claire Stapleton, Meredith Whittaker, Liz Fong-Jones, Celie O'Neil-Hart, and Erica Anderson.
https://twitter.com/lizthegrey/status/1150960547803860993

Google rejects all 13 shareholder proposals at its annual meeting, despite protesting workers

Google Walkout organizer, Claire Stapleton resigns after facing retaliation from management

#NotOkGoogle: Employee-led town hall reveals hundreds of stories of retaliation at Google


Volkswagen, under its new self-driving vehicle alliance with Ford, invests $2.6 billion in Argo AI

Bhagyashree R
15 Jul 2019
3 min read
After ending its ties with self-driving developer Aurora earlier this year, Volkswagen disclosed on Friday that it is investing $2.6 billion in Ford's autonomous-car partner, Argo AI. The deal, which values the operation at more than $7 billion, is part of a broader alliance between Volkswagen and Ford covering autonomous and electric vehicles.

"While Ford and Volkswagen remain independent and fiercely competitive in the marketplace, teaming up and working with Argo AI on this important technology allows us to deliver unmatched capability, scale, and geographic reach," Ford Chief Executive Officer Jim Hackett said.

Under this alliance, Ford and Volkswagen are joining forces to take advantage of each other's strengths: Ford is ahead of Volkswagen in autonomous driving, while Volkswagen is more advanced in electric cars. Volkswagen plans to merge its Munich-based subsidiary, Autonomous Intelligent Driving (AID), including its 200 employees and the intellectual property they have developed, into Argo. Argo AI, founded in 2016, has about 500 employees, and the merger will bring that number to 700.

Argo AI's chief executive, Bryan Salesky, worked for Google before founding the company. He believes the deal will help Argo scale. In a Reuters interview, he said, "We have two great customers and investors who are going to help us really scale and are committed to us for the long term." He added that Argo is open to additional strategic or financial investors to help share the costs of bringing self-driving vehicles to market: "We all realize this is a time-, talent- and capital-intensive business."

Ford and VW, Argo's two investors, will each hold an equal minority stake in the startup; together, their stakes make up a majority. Argo's board will also expand from five to seven members.
The investment looks promising for Volkswagen, as it opens an opportunity to catch up with Alphabet Inc.'s Waymo and General Motors Co.'s Cruise unit. Given how resource-intensive the field is, such alliances make sense. Ferdinand Dudenhöffer, a professor at the University of Duisburg-Essen in Germany, said in an email to the New York Times, "Autonomous driving is a very, very expensive technology. One has to invest today in order to make the first sales in 2030, maybe. Therefore it makes a lot of sense for Ford and VW to work together."

Alphabet's Waymo to launch the world's first commercial self driving cars next month

Apple gets into chip development and self-driving autonomous tech business

Tesla Autonomy Day takeaways: Full Self-Driving computer, Robotaxis launching next year, and more


Stripe’s API degradation RCA found unforeseen interaction of database bugs and a config change led to cascading failure across critical services

Vincy Davis
15 Jul 2019
4 min read
On 10th July, Stripe's API services went down twice, from 16:36–17:02 UTC and again from 21:14–22:47 UTC. Though the services recovered, the incidents caused significantly elevated error rates and response times. Two days later, on 12th July, Stripe shared a root cause analysis of the repeated degradation, as requested by users. David Singleton, Stripe's CTO, summarized the API failures: "two different database bugs and a configuration change interacted in an unforeseen way, causing a cascading failure across several critical services."

What was the cause of Stripe's first API degradation?

Three months ago, Stripe had upgraded its databases to a new minor version and performed the necessary testing to maintain a quality-assured environment, including a phased production rollout from less critical clusters to increasingly critical ones. Though the new version operated properly for three months, on the day of the event it failed due to multiple stalled nodes, which left one shard unable to elect a new primary.

[box type="shadow" align="" class="" width=""]"Stripe splits data by kind into different database clusters and by quantity into different shards. Each cluster has many shards, and each shard has multiple redundant nodes."[/box]

As the shard was widely used, its unavailability starved the API's compute resources and severely degraded the API services. The Stripe team detected the failed election within a minute and started incident response within two minutes. The team forced the election of a new primary, which required restarting the database cluster, and 27 minutes after the degradation began, the Stripe API fully recovered.

What caused Stripe's API to degrade again?

Once the API recovered, the team began investigating the root cause of the first degradation.
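The core failure condition in the first degradation, a shard left without an electable primary because its nodes stalled, can be illustrated with a small health check. This is a toy sketch in Python; the node-status schema and field names are hypothetical, not Stripe's actual monitoring tooling:

```python
# Toy sketch of the stalled-shard condition behind Stripe's first
# degradation: a shard whose nodes have stalled cannot elect a new
# primary. The node-status schema below is hypothetical, not Stripe's
# actual monitoring format.

def shards_without_primary(cluster):
    """Return the shards that have no healthy primary node.

    cluster maps shard name -> list of node dicts with "role"/"state".
    """
    degraded = []
    for shard, nodes in cluster.items():
        healthy = [n for n in nodes if n["state"] != "stalled"]
        if not any(n["role"] == "primary" for n in healthy):
            degraded.append(shard)
    return degraded

cluster = {
    "shard-a": [  # primary stalled; only healthy node is a secondary
        {"role": "primary", "state": "stalled"},
        {"role": "secondary", "state": "stalled"},
        {"role": "secondary", "state": "ok"},
    ],
    "shard-b": [  # healthy
        {"role": "primary", "state": "ok"},
        {"role": "secondary", "state": "ok"},
    ],
}

print(shards_without_primary(cluster))  # ['shard-a']
```

Alerting on a condition like this, alongside alerting when nodes stop reporting replication lag, is the flavor of additional monitoring Stripe says it has since put in place.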
They identified a suspect code path in the new version of the database's election protocol and decided to revert all shards of the impacted cluster to the previous known stable version, a change deployed within four minutes. The cluster worked fine until 21:14 UTC, when automated alerts fired indicating that some shards in the cluster were again unavailable, including the shard implicated in the first degradation.

Though the symptoms appeared to be the same, the second degradation had a different cause: the reverted stable version interacted poorly with a configuration change to the production shards. Once CPU starvation was observed, the Stripe team updated the production configuration and restored the affected shards. After verifying the shards were healthy, the team ramped traffic back up, prioritizing user-initiated API requests. Stripe's API services fully recovered at 22:47 UTC.

Remedial actions taken

Stripe has undertaken several measures to ensure such degradation does not recur:

- An additional monitoring system has been implemented to alert whenever nodes stop reporting replication lag.
- Several changes have been introduced to prevent failures of individual shards from cascading across large fractions of API traffic.
- Stripe will introduce more procedures and tooling so that operators can make rapid configuration changes safely during incident response.

Reactions to Stripe's analysis of the API degradation have been mixed. Some users believe the Stripe team should have focused on mitigating the error completely rather than analyzing the situation in the moment. A Hacker News comment read, "In my experience customers deeply detest the idea of waiting around for a failure case to re-occur so that you can understand it better. When your customers are losing millions of dollars in the minutes you're down, mitigation would be the thing, and analysis can wait. All that is needed is enough forensic data so that testing in earnest to reproduce the condition in the lab can begin. Then get the customers back to working order pronto. 20 minutes seems like a lifetime if in fact they were concerned that the degradation could happen again at any time. 20 minutes seems like just enough time to follow a checklist of actions on capturing environmental conditions, gather a huddle to make a decision, document the change, and execute on it. Commendable actually, if that's what happened."

Other users appreciated Stripe's analysis report.

https://twitter.com/thinkdigitalco/status/1149767229392769024

Visit the Stripe website for a detailed timeline report.

Twitter experienced major outage yesterday due to an internal configuration issue

Facebook, Instagram and WhatsApp suffered a major outage yesterday; people had trouble uploading and sending media files

Cloudflare suffers 2nd major internet outage in a week. This time due to globally deploying a rogue regex rule.

Hello 'gg', a new OS framework to execute super-fast apps on "1000s of transient functional containers"

Bhagyashree R
15 Jul 2019
4 min read
Last week at the USENIX Annual Technical Conference (ATC) 2019, a team of researchers introduced gg, an open-source framework that helps developers execute applications using thousands of parallel threads on a cloud-functions service to achieve near-interactive completion times.

"In the future, instead of running these tasks on a laptop, or keeping a warm cluster running in the cloud, users might push a button that spawns 10,000 parallel cloud functions to execute a large job in a few seconds from start. gg is designed to make this practical and easy," the paper reads.

At USENIX ATC, leading systems researchers present cutting-edge systems research, offering insight into topics like virtualization, network management and troubleshooting, cloud and edge computing, security, privacy, and more.

Why the gg framework was introduced

Cloud functions, better known as serverless computing, provide developers finer granularity and lower latency. Though they were introduced for event handling and invoking web microservices, their granularity and scalability make them a good candidate for a "burstable supercomputer-on-demand": a system that launches a burst-parallel swarm of thousands of cloud functions, all working on the same job. The goal is to deliver results to an interactive user much faster than their own computer or a freshly booted cold cluster could, at a lower cost than keeping a warm cluster running for occasional tasks. However, building applications on swarms of cloud functions poses various challenges.
The paper lists some of them:

- Workers are stateless and may need to download large amounts of code and data on startup
- Workers have limited runtime before they are killed
- On-worker storage is limited but much faster than off-worker storage
- The number of available cloud workers depends on the provider's overall load and can't be known precisely upfront
- Worker failures occur when running at large scale
- Libraries and dependencies differ in a cloud function compared with a local machine
- Latency to the cloud makes round trips costly

How gg works

Researchers have previously addressed some of these challenges; the gg framework aims to address the principal ones faced by burst-parallel cloud-functions applications. With gg, developers and users can build applications that burst from zero to thousands of parallel threads to achieve low latency for everyday tasks. The following diagram shows its composition:

Source: From Laptop to Lambda: Outsourcing Everyday Jobs to Thousands of Transient Functional Containers

The gg framework lets you build applications on an abstraction of transient, functional containers, also known as thunks. Applications express their jobs as a graph of interrelated thunks, or Linux containers, and then schedule, instantiate, and execute those thunks on a cloud-functions service. The framework can containerize and execute existing programs such as software compilation, unit tests, and video encoding with the help of short-lived cloud functions, which in some cases gives substantial performance gains. Depending on how frequently a task runs, it can also be less expensive than keeping a comparable cluster running continuously. gg's functional approach and fine-grained dependency management give significant performance benefits when compiling large programs from a cold start.
Here's a summary of the results for compiling Inkscape, an open-source vector graphics editor:

Source: From Laptop to Lambda: Outsourcing Everyday Jobs to Thousands of Transient Functional Containers

When running "cold" on AWS Lambda, gg was nearly 5x faster than an existing icecc system running on a 48-core or 384-core cluster of VMs.

To know more, read the paper: From Laptop to Lambda: Outsourcing Everyday Jobs to Thousands of Transient Functional Containers. You can also check out gg's code on GitHub. Also, watch the talk in which Keith Winstein, an assistant professor of Computer Science at Stanford University, explains the purpose of gg and demonstrates how it works:

https://www.youtube.com/watch?v=O9qqSZAny3I&t=55m15s

Cloud computing trends in 2019

Cloudflare's Workers enable containerless cloud computing powered by V8 Isolates and WebAssembly

Serverless Computing 101


Google Cloud and Nvidia Tesla set new AI training records with MLPerf benchmark results

Amrata Joshi
15 Jul 2019
3 min read
Last week, the MLPerf effort released the results for MLPerf Training v0.6, the second round of results from its machine learning training performance benchmark suite. AI practitioners use these benchmarks as a common standard for measuring the performance and speed of the hardware used to train AI models. This round, Nvidia and Google Cloud set new AI training time records.

MLPerf v0.6 measures the training performance of machine learning acceleration hardware in six categories: image classification, object detection (lightweight), object detection (heavyweight), translation (recurrent), translation (non-recurrent), and reinforcement learning. MLPerf is a consortium of more than 40 companies and researchers from leading universities, and its benchmark suites are becoming the industry standard for measuring machine learning performance.

As per the results, Nvidia's Tesla V100 Tensor Core GPUs in an Nvidia DGX SuperPOD completed on-premise training of the ResNet-50 model for image classification in 80 seconds. Nvidia was also the only vendor to submit results in all six categories. When Nvidia launched the DGX-1 server in 2017, the same model training took 8 hours. In a statement to ZDNet, Paresh Kharya, director of Accelerated Computing for Nvidia, said, "The progress made in just a few short years is staggering. The results are a testament to how fast this industry is moving."

Google Cloud entered five categories and set three records for performance at scale with its Cloud TPU v3 Pods, Google's latest generation of supercomputers, built specifically for machine learning. Each of the record-setting runs used less than two minutes of compute time.

The TPU v3 Pods set a record in machine translation, training the Transformer model from English to German in 51 seconds. Cloud TPU v3 Pods train models over 84% faster than the fastest on-premise systems in the MLPerf Closed Division. TPU Pods also achieved record performance in the image classification benchmark, training ResNet-50 on the ImageNet data set, and in another object detection category, finishing model training in 1 minute and 12 seconds.

In a statement to ZDNet, Google Cloud's Zak Stone said, "There's a revolution in machine learning. All these workloads are performance-critical. They require so much compute, it really matters how fast your system is to train a model. There's a huge difference between waiting for a month versus a couple of days."

Google suffers another Outage as Google Cloud servers in the us-east1 region are cut off

Google Cloud went offline taking with it YouTube, Snapchat, Gmail, and a number of other web services

Google Cloud introduces Traffic Director Beta, a networking management tool for service mesh


Amazon EventBridge: An event bus with higher security and speed to boost AWS serverless ecosystem

Vincy Davis
15 Jul 2019
4 min read
Last week, Amazon had big news for its AWS serverless ecosystem, one that some consider the biggest thing since AWS Lambda itself: the launch of Amazon EventBridge, which aims to help customers integrate their own AWS applications with Software-as-a-Service (SaaS) applications.

EventBridge is an asynchronous, fast, clean, and easy-to-use event bus that publishes events specific to each AWS customer. A SaaS application and code running on AWS are now independent of any shared communication protocol, runtime environment, or programming language. This allows Lambda functions to handle events from a SaaS application as well as route events to other AWS targets.

Like CloudWatch Events, EventBridge has a default event bus that accepts events from AWS services and calls to PutEvents. One distinction is that in EventBridge, each partner application a user subscribes to also creates an event source, which can then be associated with an event bus in the user's AWS account. AWS users can select any of their event buses, create EventBridge rules, and select targets to invoke when an incoming event matches a rule.

Important terms for understanding Amazon EventBridge

- Partner: An organization that has integrated its SaaS application with EventBridge.
- Customer: An organization that uses AWS and has subscribed to a partner's SaaS application.
- Partner Name: A unique name that identifies an Amazon EventBridge partner.
- Partner Event Bus: An event bus used to deliver events from a partner to AWS.

How EventBridge works for partners and customers

A partner lets its customers enter an AWS account number and select an AWS region. The partner then calls CreatePartnerEventSource in the desired region and informs the customer of the event source name.
After accepting the invitation to connect, the customer waits for the status of the event source to change to Active. Each time an event of interest to the customer occurs, the partner calls PutPartnerEvents, referencing the event source.

Image Source: Amazon

On the customer side, the customer accepts the invitation to connect by calling CreateEventBus to create an event bus associated with the event source, then adds rules and targets to prepare Lambda functions to process the events. Associating the event source with an event bus also activates the source and starts the flow of events; customers can use DeactivateEventSource and ActivateEventSource to control the flow.

Amazon EventBridge launches with ten partner event sources, including Datadog, Zendesk, PagerDuty, Whispir, Segment, and Symantec. This is big news for users building serverless applications: with built-in partner integrations, these partners can trigger an event in EventBridge directly, without the need for a webhook. Thus "AWS is the mediator rather than HTTP," notes Paul Johnston, the ServerlessDays cofounder. He adds, "The security implications of partner integrations are the first thing that springs to mind. The speed implications will almost certainly be improved as well, with those partners almost certainly using AWS events at the other end as well."

https://twitter.com/PaulDJohnston/status/1149629728065650693

https://twitter.com/PaulDJohnston/status/1149629729571397632

Users are excited about the kind of creative freedom Amazon EventBridge will bring to their products.

https://twitter.com/allPowerde/status/1149792437738622976

https://twitter.com/ShortJared/status/1149314506067255304

https://twitter.com/petrabarus/status/1149329981975040000

https://twitter.com/TobiM/status/1149911798256152576

Users with SaaS applications can integrate with EventBridge Partner Integration.
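The partner and customer flow described above revolves around two payloads: the entries a partner submits via PutPartnerEvents and the event pattern a customer's rule uses to match them on the partner event bus. A minimal sketch of both as plain Python dicts follows; the partner source name, detail fields, and helper function are illustrative inventions, and in practice these dicts would be passed to the AWS SDK:

```python
import json

# Illustrative payloads only: the partner source name, detail fields,
# and helper function are invented, not a real partner integration.

def partner_event_entry(event_source, detail_type, detail):
    """Build one entry of the kind passed to a PutPartnerEvents call."""
    return {
        "Source": event_source,
        "DetailType": detail_type,
        "Detail": json.dumps(detail),  # Detail must be a JSON string
    }

entry = partner_event_entry(
    "aws.partner/example.com/acct-123/tickets",
    "ticket.created",
    {"ticket_id": 42, "priority": "high"},
)

# An event pattern a customer might attach to a rule on the partner
# event bus, routing only high-priority tickets to a Lambda target.
rule_pattern = {
    "source": ["aws.partner/example.com/acct-123/tickets"],
    "detail-type": ["ticket.created"],
    "detail": {"priority": ["high"]},
}

print(entry["DetailType"])  # ticket.created
```

Because rules match on the event's source, detail-type, and detail fields, the partner controls what it emits while each customer decides independently which subset of events reaches their targets.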
Visit the Amazon blog to learn how to implement EventBridge.

Amazon's partnership with NHS to make Alexa offer medical advice raises privacy concerns and public backlash

Amazon Aurora makes PostgreSQL Serverless generally available

Amazon launches VPC Traffic Mirroring for capturing and inspecting network traffic

Google’s language experts are listening to some recordings from its AI assistant

Bhagyashree R
12 Jul 2019
4 min read
After the news of Amazon employees listening to your Echo audio recordings, we now have the unsurprising report of Google employees doing the same. The news was reported by the Belgian public broadcaster VRT NWS on Wednesday. Addressing it, Google acknowledged in yesterday's blog post that it does this to make its AI assistant smarter at understanding user commands regardless of language.

In its privacy policies, the tech giant states, "Google collects data that's meant to make our services faster, smarter, more relevant, and more useful to you. Google Home learns over time to provide better and more personalized suggestions and answers." The policies also mention that Google shares information with its affiliates and other trusted businesses. What they do not explicitly say is that these recordings are shared with its employees too. Google hires language experts to transcribe audio clips recorded by Google's AI assistant, and those reviewers can end up listening to sensitive information about users.

Whenever you make a request to a Google Home smart speaker, or any other smart speaker for that matter, your speech is recorded. These audio recordings are sent to the servers of the companies, which use them to train their speech recognition and natural language understanding systems. A small subset of these recordings, 0.2% in Google's case, is sent to language experts around the globe who transcribe them as accurately as possible. Their work is not about analyzing what the user is saying but how they are saying it, which helps Google's AI assistant understand the nuances and accents of a particular language.

The problem is that these recordings often contain sensitive data. Google claims in the blog post that the audio snippets are analyzed anonymously, meaning reviewers cannot identify the user they are listening to.
“Audio snippets are not associated with user accounts as part of the review process, and reviewers are directed not to transcribe background conversations or other noises, and only to transcribe snippets that are directed to Google,” the tech giant said.

Countering this claim, VRT NWS was able to identify people through personal addresses and other sensitive information in the recordings. “This is undeniably my own voice,” said one man. Another family was able to recognize the voice of their son and grandson in the recording.

What is worse is that sometimes these smart speakers record audio clips entirely by accident. Despite the companies claiming that these devices only start recording when they hear their “wake words” like “Okay Google,” there are many reports showing the devices often start recording by mistake. Of the thousand or so recordings reviewed by VRT NWS, 153 were captured accidentally.

Google mentioned in the blog post that it applies “a wide range of safeguards to protect user privacy throughout the entire review process.” It further accepted that these safeguards failed in the case of the Belgian contract worker who shared the audio recordings with VRT NWS, violating the company’s data security and privacy rules in the process.

“We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data. Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again,” the tech giant wrote.

Companies not being upfront about the transcription process can cause legal trouble for them. Michael Veale, a technology privacy researcher at the Alan Turing Institute in London, told Wired that this practice of sharing personal information of users might not meet the standards set by the EU’s GDPR regulations.
“You have to be very specific on what you’re implementing and how. I think Google hasn’t done that because it would look creepy,” he said.

Read the entire story on VRT NWS’s official website. You can watch the full report on YouTube.

https://youtu.be/x8M4q-KqLuo

Amazon’s partnership with NHS to make Alexa offer medical advice raises privacy concerns and public backlash

Amazon is being sued for recording children’s voices through Alexa without consent

Amazon Alexa is HIPAA-compliant: bigger leap in the health care sector


What to expect in Unreal Engine 4.23?

Vincy Davis
12 Jul 2019
3 min read
A few days ago, Epic released the first preview of Unreal Engine 4.23 for the developer community to try out its features and report any issues before the final release. This version adds Skin Weight Profiles, VR Scouting tools, and new Pro Video Codecs, along with updates to XR, animation, core, virtual production, gameplay and scripting, audio, and more. The previous version, Unreal Engine 4.22, focused on adding photorealism to real-time environments.

Some updates in Unreal Engine 4.23

XR

- HoloLens 2 Native Support: Unreal Engine 4.23 adds native platform support for Microsoft's HoloLens 2.
- Stereo Panoramic Capture Tool Improvements: Updates to the Stereo Panoramic Capture tool make it much easier to capture high-quality stereoscopic stills and videos of the virtual world in industry-standard formats, and to view those captures in an Oculus or GearVR headset.

Animation

- Skin Weight Profiles: The new Skin Weight Profile system enables users to override the original skin weights stored with a Skeletal Mesh.
- Animation Streaming: Aimed at improving memory management for animation data.
- Sub Animation Graphs: New Sub Anim Graphs allow dynamic switching of sub-sections of an Animation Graph, enabling multi-user collaboration and memory savings for vaulted or unavailable items.

Core

- Unreal Insights Tool: Helps developers collect and analyze data about the Engine's behavior in a uniform fashion. The system has three components:
  - The Trace System API gathers information from runtime systems in a consistent format and captures it for later processing. Multiple live sessions can contribute data at the same time.
  - The Analysis API processes data from the Trace System API and converts it into a form the Unreal Insights tool can use.
  - The Unreal Insights tool provides an interactive visualization of data processed through the Analysis API, giving developers a unified interface for stats, logs, and metrics from their application.

Virtual production

- Remote Control over HTTP
- Extended LiveLink Plugin
- New VR Scouting tools
- New Pro Video Codecs
- nDisplay: Warp and Blend for Curved Surfaces
- Virtual Camera Improvements

Gameplay & scripting

- UMG Widget Diffing: Expanded and improved Blueprint diffing now supports Widget Blueprints as well as Actor and Animation Blueprints.

Audio

- Open Sound Control: A native implementation of the Open Sound Control (OSC) standard in an Unreal Engine plugin.
- Wavetable Synthesis: The new monophonic wavetable synthesizer leverages UE4's built-in curve editor to author time-domain wavetables, enabling a wide range of sound design capabilities driven by gameplay parameters.

There are many more updates to the Editor, the Niagara editor, physics simulation, the rendering system, and the Sequencer multi-track editor in Unreal Engine 4.23. The Unreal Engine team has notified users that the preview release is not fully quality tested and should be considered unstable until the final release.

Users are excited to try the latest version of Unreal Engine 4.23.

https://twitter.com/ClicketyThe/status/1149070536762372096

https://twitter.com/cinedatabase/status/1149077027565309952

https://twitter.com/mygryphon/status/1149334005524750337

Visit the Unreal Engine page for more details.

Unreal Engine 4.22 update: support added for Microsoft's DirectX Raytracing (DXR)

Unreal Engine 4.20 released with focus on mobile and immersive (AR/VR/MR) devices

What's new in Unreal Engine 4.19?