
Tech Guides

851 Articles
The Atomic Game Engine: How to Become a Contributor

RaheelHassim
05 Dec 2016
6 min read
What is the Atomic Game Engine?

The Atomic Game Engine is a powerful multiplatform game development tool that can be used for both 2D and 3D content. It is layered on top of Urho3D, an open source game development tool, and also makes use of an extensive list of third-party libraries including Duktape, Node.js, Poco, libcurl, and many others.

What makes it great?

It supports many platforms, such as Windows, OS X, Linux, Android, iOS, and WebGL. It also has a flexible scripting approach: users can choose to code in C#, JavaScript, TypeScript, or C++. There is an extensive library of example games available to all users, which shows off different aspects and qualities of the engine.

[Image taken from: http://atomicgameengine.com/blog/announcement-2/]

What makes it even greater for developers?

Atomic has recently announced that it is now under the permissive MIT license.

Errr great… What exactly does that mean?

This means that Atomic is now completely open source, and anyone can use it, modify it, publish it, and even sell it, as long as the copyright notice remains on all substantial portions of the software. Basically, just don't remove the text in the picture below from any of the scripts and it should be fine. Here's what the MIT license in the Atomic Game Engine looks like:

[Image: Atomic Game Engine MIT license]

Why should I spend time and effort contributing to the Atomic Game Engine?

The non-restrictive MIT license makes it easy for developers to contribute to the engine freely and get creative without the fear of breaking any laws. The Atomic Game Engine acknowledges all of its contributors by publishing their names in the list of developers working on the engine, and contributors have access to a very active community where almost all questions are answered and developers are supported. As a junior software developer, I feel I've gained invaluable experience by contributing to open source software, and it's also a really nice addition to my portfolio. There is a list of issues available on the GitHub page, where each issue is labeled with a difficulty level, priority, and issue type.

This is wonderful! How do I get started?

Contributors can download the MIT-licensed source code here: https://github.com/AtomicGameEngine/AtomicGameEngine

*Disclaimer: This tutorial is based on using the Windows platform, SmartGit, and Visual Studio Community 2015.

**Another Disclaimer: I wrote this tutorial with someone like myself in mind, i.e. amazingly average in many ways, but also relatively new to the industry and a first-time contributor to open source software.

Step 1: Install Visual Studio Community 2015 here.

[Image: Visual Studio download page]

Step 2: Install CMake, making sure cmake is on your path.

[Image: CMake install options]

Step 3: Fork the Atomic Game Engine's repository to create your own version of it.

a) Go to the AtomicGameEngine GitHub page and click on the Fork button. This will allow you to experiment and make changes to your own copy of the engine without affecting the original version.

[Image: Fork the repository]

b) Navigate to your GitHub profile and click on your forked version of the engine.

[Image: GitHub profile page with repositories]

Step 4: Clone the repository and include all of the submodules.

a) Click the green Clone or download button on the right and copy the web URL of your repository.

[Image: Your AGE GitHub page]

b) Open up SmartGit (or any other Git client) to clone the repository onto your machine.

[Image: Clone repository in SmartGit]

c) Paste the URL you copied earlier into the Repository URL field.
[Image: Copy remote URL]

d) Include all Submodules and Fetch all Heads and Tags.

[Image: Include all submodules]

e) Select a local directory to save the engine.

[Image: Add a local directory to save the engine on your machine]

f) Your engine should start cloning...

We've set everything up for our local repository. Next, we'd like to sync the original AtomicGameEngine with our local version of the engine so that we can always stay up to date with any changes made to the original engine.

Step 5: Create an upstream remote.

a) Click Remote → Add, then:
   i) Add the AtomicGameEngine remote URL.
   ii) Name it upstream.

[Image: Adding an upstream to the original engine]

We are ready to start building a Visual Studio solution of the engine.

Step 6: Run the CMake_VS2015.bat batch file in the AtomicGameEngine directory. This will generate a new folder in the root directory, which will contain the Atomic.sln for Visual Studio.

[Image: AGE directory]

At this point, we can make some changes to the engine (click here for a list of issues). Create a feature branch off master for pull requests, and remember to stick to the code conventions already in use. Once you're happy with the changes you've made to the engine:

- Update your branch by merging in upstream. Resolve all conflicts and test it again.
- Commit your changes and push them up to your branch.

It's now time to send a pull request.

Step 7: Send a Pull Request.

a) Go to your fork of the AtomicGameEngine repository on GitHub. Select the branch you want to send through, and click New Pull Request.

b) Always remember to reference the issue number in your message to make it easier for the creators to manage the issues list.

[Image: Personal version of the AGE]

Your pull request will get reviewed by the creators, and if the content is acceptable it will get landed into the engine and you'll become an official contributor to the Atomic Game Engine!

Resources for the Blog:
[1] The Atomic Game Engine Website
[2] Building the Atomic Editor from Source
[3] GitHub Help: Fork a Repo
[4] What I mean when I use the MIT license

About the Author:

RaheelHassim is a software developer who recently graduated from Wits University in Johannesburg, South Africa. She was awarded the IGDA Women in Games Ambassadors scholarship in 2016 and attended the Game Developers Conference. Her games career started at Luma Interactive, where she became a contributor to the Atomic Game Engine. In her free time she binge-watches Friends and plays music covers on her guitar.

Stack Structure for Managing Game State

Ryan Roden-Corrent
05 Dec 2016
8 min read
Structuring the flow of logic in a game can be challenging. If you're not careful, you quickly end up with a scattered collection of state variables and conditionals that is difficult to wrap your head around. In my past two game projects, I found it helpful to structure my game flow as a stack of states. In this article, I'll give a quick overview of this technique and some examples of what makes it useful. The example code is written in D, but it should be pretty easy to apply in any language.

Stacking States for Isolation

Stacking states provides a nice way to isolate chunks of game logic from one another. I leveraged this while making damage_control, a game reminiscent of the Arcade/SNES title Rampart. In it, a match is divided into rounds, and each round passes through a series of phases. First you place some turrets in your territory, then you fire at your opponent, and then you try to repair the damage done during the firing phase. Before each phase, a banner scrolls across the screen telling the player what phase they are in. Here's the logic that sets up a new round (simplified from the original source for clarity):

game.states.push(
  new ShowBanner("Place Turrets", game),
  new PlaceTurrets(game),
  new ShowBanner("Fire!", game),
  new Fire(game, _currentRound),
  new ShowBanner("Rebuild!", game),
  new PlaceWalls(game),
  new StatsSummary(game));

Because all of the states can be stacked up at once within a single function, none of the states have to be aware of what state comes next. For example, PlaceWalls doesn't have to know to show a stats summary when it ends; it just pops itself off the stack when done and lets the next state kick in.

The code shown above resides in the StartRound state, which sits at the bottom of the state stack. Once all the phases for the current round are popped, we once again enter StartRound and push a new set of states. The flow of states looks like this (the right side represents the top of the stack, or the active state):

StartRound
StartRound | StatsSummary | PlaceWalls | ShowBanner | Fire | ShowBanner | PlaceTurrets | ShowBanner
StartRound | StatsSummary | PlaceWalls | ShowBanner | Fire | ShowBanner | PlaceTurrets
StartRound | StatsSummary | PlaceWalls | ShowBanner | Fire | ShowBanner
StartRound | StatsSummary | PlaceWalls | ShowBanner | Fire
StartRound | StatsSummary | PlaceWalls | ShowBanner
StartRound | StatsSummary | PlaceWalls
StartRound | StatsSummary
StartRound
StartRound | StatsSummary | PlaceWalls | ShowBanner | Fire | ShowBanner | PlaceTurrets | ShowBanner
... and so on ...

I'll provide another example at the end of the article, but first I'll discuss the implementation.

The State

interface State(T) {
  void enter(T);
  void exit(T);
  void run(T);
}

T is a generic type here, and represents whatever kind of object the states operate on. For example, it might be a Game object that provides access to game entities, resources, input devices, and more. At any given time, you have a single active state; run is executed once for each update loop of the game. enter is called whenever a state becomes active, before the first call to run. This allows the state to perform any preparation it needs before it begins its normal flow of logic. Similarly, exit allows a state to perform some sort of tear-down before it becomes inactive. Note that enter and exit are not equivalent to a constructor and destructor; we will see later that a single state may enter and exit multiple times during its life.

As an example, let's take the PlaceWalls state from earlier: enter might start a timer for how long the state should last, run would process input from the player to move and place pieces, and exit would mark off areas that the player had enclosed.
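Since the pattern ports easily to other languages, here is a minimal JavaScript sketch of what such a state might look like; every game API used here (deltaTime, input, grid, states) is a hypothetical stand-in, not part of the article's actual codebase:

// Sketch of a PlaceWalls-like state, following the enter/run/exit interface above.
class PlaceWalls {
  enter(game) {
    this.timeLeft = 30; // hypothetical: give the player 30 seconds to build
  }

  run(game) {
    this.timeLeft -= game.deltaTime;   // hypothetical frame-time value
    game.input.handlePiecePlacement(); // hypothetical input handling
    if (this.timeLeft <= 0) {
      game.states.pop();               // phase over; let the next state kick in
    }
  }

  exit(game) {
    game.grid.markEnclosedAreas();     // hypothetical: score enclosed territory
  }
}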
The Stack

The StateStack itself is pretty straightforward as well. It only needs to support three operations:

- push: place a state on top of the stack.
- pop: remove the state on top of the stack.
- run: cause the state on top of the stack to process its object.

The only bit of trickiness comes in managing those enter and exit calls mentioned earlier. The state stack must ensure that the following happens during a state transition:

- enter is called once, before run, on a state that was previously inactive.
- exit is called on a state that becomes inactive.
- enter and exit are called an equal number of times during a state's life.

struct StateStack(T) {
  private {
    bool _entered;
    SList!State _stack;
    T _object;
  }

  void push(State!T[] states ...) {
    if (_entered) {
      _stack.top.exit(_object);
      _entered = false;
    }

    // Note that we push the new states, but do _not_ call enter() yet.
    // If push is called again before run, we only want to enter the top state.
    foreach_reverse(state ; states) {
      _stack.insertFront(state);
    }
  }

  void pop() {
    // get ref to current state, top may change during exit
    auto popped = _stack.top;
    _stack.removeFront;

    if (_entered) {
      // the state we are popping had been entered, so we need to exit it
      _entered = false;
      popped.exit(_object);
    }
  }

  void run(T obj) {
    // cache obj for calls to exit() that are triggered by pop()
    _object = obj;

    // top.enter() could push/pop, so keep going until the top state is entered
    while(!_entered) {
      _entered = true;
      top.enter(obj);
    }

    // finally, our stack has stabilized
    top.run(obj);
  }
}

The implementation is mostly straightforward, but there are a few caveats. It is valid (and useful) for a state to push() and pop() states during its enter. In the previous example, StartRound pushes a number of states during enter. Therefore, implementing StateStack.run like so would be incorrect:

if (!_entered) {
  _entered = true;
  top.enter(obj);
}
top.run(obj);

After pushing StartRound and calling StateStack.run, it could call StartRound.enter, which would push more states onto the stack. It would then call top.run(obj) on whatever state was last pushed, which hasn't been entered yet! For this reason, run uses the while (!_entered) loop to call enter until the stack 'stabilizes'. Similarly, a state may push or pop states during its exit call. To support this, we need to cache the object that gets passed in to run so it can be used by pop.

Dissolving Complex Logic Flows

In Terra Arcana, a turn-based strategy game I developed, the StateStack made the flow of combat manageable. Here's a quick description of the rules regarding attacks:

- The attacker launches one or more strikes against the defender.
- Each strike may hit (dealing damage or some effect) or miss.
- If the defender's health has dropped to 0, they are destroyed.
- The defender may get a chance to counter-attack if:
  - They were not destroyed by the initial attack.
  - They have an attack that is in range of the attacker.
  - They have enough AP (action points) to use said attack.
- The counter-attack, like the initial attack, may have multiple strikes.
- The counter-attack may destroy the attacker.
- You cannot counter-attack a counter-attack.

Now consider that an AOE attack may hit multiple defenders, each of which gets a chance to counter-attack!
Now, computing the result of this isn't so bad -- you can probably imagine a series of if/else statements that could do the job in a single pass. The difficulty is in depicting the result to the player. We need to play animations for attacks and unit destruction, pop up text to indicate damage and status effects (or lack thereof), manipulate health/AP bars on the UI, and play sound effects at various points throughout the process. This all happens over the course of multiple update cycles rather than a single function call, so managing it with a single function would involve a whole mess of state variables (attackCount, isAnimating, isDefenderDestroyed, isCounterAttackInProgress, etc.).

With a StateStack, we can separate chunks of logic like applying damage or status effects, destroying a unit, and initiating a counter-attack into their own independent states. When an attack begins, you push a whole bunch of these onto the stack at once, and then let everything play out. Here's an excerpt of the code that initiates an attack:

battle.states.popState();

foreach(unit ; unitsAffected) {
  battle.states.push(new PerformCounter(unit, _actor));
}

foreach(unit ; unitsAffected) {
  battle.states.push(new CheckUnitDestruction(unit));

  for(int i = 0 ; i < _action.hits ; i++) {
    battle.states.push(new ApplyEffect(_action, unit));
  }
}

Remember that we are dealing with a stack, so states pushed later end up at the top (ApplyEffect happens before CheckUnitDestruction). This logic resides in the PerformAction state, so the first call removes this state from the stack before pushing the rest on.

To understand this a bit better, consider the following scenario: a unit launches an attack that hits twice. The target is not destroyed, and is capable of countering with an attack that hits three times. The states on the stack would progress like so (where the right side represents the top of the stack):

PerformAction
PerformCounter | CheckUnitDestruction | ApplyEffect | ApplyEffect
PerformCounter | CheckUnitDestruction | ApplyEffect
PerformCounter | CheckUnitDestruction
PerformCounter
CheckUnitDestruction | ApplyEffect | ApplyEffect | ApplyEffect
CheckUnitDestruction | ApplyEffect | ApplyEffect
CheckUnitDestruction | ApplyEffect
CheckUnitDestruction

Note that when PerformCounter becomes the active state, it replaces itself with three ApplyEffects and a CheckUnitDestruction. The states nicely encapsulate specific chunks of game logic, so we get to reuse the same states in PerformAction and PerformCounter.

Author: Ryan Roden-Corrent is a software developer by trade and hobby. He is an active contributor in the free/open-source software community and has a passion for simple but effective tools. He started gaming at a young age and dabbles in all aspects of game development, from coding to art and music. He's also an aspiring musician and yoga teacher. You can find his open source work here and Creative Commons art here.

What Does Brutalist Web Design Tell Us About 2016?

Erik Kappelman
02 Dec 2016
5 min read
Brutalist web design is a somewhat new design philosophy in web development. It is characterised by websites that are intentionally difficult to navigate and use, a lack of smooth lines or templates, and hyper-individualization and uniqueness. Some examples of Brutalist websites include those of Adult Swim, The Drudge Report, and Hacker News. These and other websites like them are termed 'Brutalist' in reference to a mid-20th century architectural movement of the same name. Brutalist buildings use exposed concrete, are modular in design, and choose function over form in most cases. Brutalist buildings are imposing, and can loom menacingly. With this in mind, Brutalist web design is well named: both styles are appreciated for their economically sound foundations, artistic 'honesty,' and anti-elitist undertones.

So, that's what Brutalist web design is, but what does it mean for the year 2016 and the years to come? To answer this question, I think it is good to take a look back at the origins of the Internet. At its core, the Internet is a set of protocols used to seamlessly connect networks with other networks, allowing individuals to instantly share information. One of the fundamental tools for this network of networks is a universal way of displaying information: enter HTML, and eventually CSS. HTML and CSS have held up the design end of the greatest technological renaissance in human history, from 1993 through to the present day. Information is displayed in new ways, in new places, and at new speeds. All this took was an incredible amount of work by designers and developers.

Today, despite the latest versions of HTML and CSS still being behind the front-end of basically every website on the planet, any web designer looking for a job knows that mastering tools that wrap around HTML and CSS, like WordPress, Bootstrap, and Sass, is more important than improving the ability to hand-code HTML and CSS. WordPress, Bootstrap, and tools like them were born of necessity. People began to demand more and more ornate websites as the Internet proliferated, and certain schemas and templates became more popular than others. Designers created tools that allowed them to create flashy websites fast, in order to meet demand. The end of this story is the modern Internet: extremely well-designed websites that are almost indistinguishable from one another.

Brutalist web design is a response to this evolution. Brutalism demands that what a website can do, not how it looks, be the measure of the website's value. Principles of Brutalist web design would also suggest that templates are the antithesis of creativity. As someone who has some experience with web design, I understand where Brutalism is coming from. It's difficult when a client is wrapped up in whether or not a menu bounces off the top of the pane as the user scrolls, but shrugs off a site's ability to change content based on a user's physical location.

That said, is 2016 the beginning of the Brutalist web design revolution? Well, I would ask: did the Brutalist architectural movement last? Given that Brutalist buildings were only built for about 20 years, and the age of the Internet moves faster and faster every day, I would suggest that Brutalist web design will likely be unheard of in less than five years.

I believe this for two reasons. First, it just seems too much like a fad not to be a fad. Websites that look like they came from the early nineties, websites that are hard to navigate, named after a niche architectural movement from the seventies… this all screams fad to me. Second, people like aesthetics, because people are lazy. Aesthetics themselves are a reflection of our own laziness: we like to be told what to like and what is beautiful. Greco-Roman architecture is still seen around the world after thousands of years because the aesthetic is pleasing. The smooth lines of buildings like the Guggenheim in New York City, or the color and form of Saint Basil's Cathedral in Moscow, show that many people like the kind of meticulous design seen in websites like Twitter or FiveThirtyEight, and many others.

But I still haven't answered the title question. I think Brutalist web design means that in 2016 we are on the cusp of some real changes in the way we view and share information. Ubiquitous wearable tech and the 'Internet of Things' are just two of the many big changes right around the corner. Brutalism feels like a step sideways rather than a step forward. We may be taking sidesteps because steps forward are currently indeterminate or difficult. In the simplest terms, Brutalism means that in 2016 some people are trying to break out of a design system that has gotten better, but hasn't fundamentally changed, in over 20 years. Brutalist web design suggests that web design is likely to experience tremendous changes in the near future as the Internet itself changes. Traditional aesthetics will likely endure, but Brutalist web design suggests people are bored and want more. What exactly that is, only time will tell.

Author: Erik Kappelman is a transportation modeler for the Montana Department of Transportation. He is also the CEO of Duplovici, a technology consulting and web design company.

How to master Continuous Integration: Tools and Strategies

Erik Kappelman
22 Nov 2016
4 min read
As time moves forward, the development of software is becoming more and more geographically and temporally fragmented. This is due to the globalized world in which we now live, and the emergence of many tools and techniques that allow many types of work, including software development, to be decentralized. As with any powerful tool, software development tools that let multiple developers share code simultaneously need to be managed appropriately, or the results can be quite negative. Continuous integration is one strategy for managing large software codebases through the development process.

Continuous integration is really quite simple on the surface. In order to reduce the extra work that accompanies a branch and the mainline of code becoming non-integrable, continuous integration advocates for, well, continuous integration. The basic tenets of continuous integration are fairly well known, but they are worth mentioning explicitly:

- Projects need to have a code repository; if this isn't a given in your development processes, it needs to be.
- Automating the build of projects can also increase the efficacy of continuous integration.
- Part of an automated build should also include self-testing in a production-level environment.
- Testing should be performed using the most up-to-date build of a project, to ensure the tests are being run against the correct codebase.
- All developers need to commit their changes at least once every working day.

These could be considered the basic requirements for a development process that is continuously integrated.

While this blog could focus on one specific continuous integration tool, I think an overview of a few tools to point someone in the right direction is better. This Wikipedia page compares a number of the continuous integration tools available, under both proprietary and open licenses.

A really great starting tool to learn is Buildbot. There are a few reasons to like this tool. First of all, it's very much open source and completely free. It also uses Python as its configuration and control language; Python is an all-around good language, and lends itself very well to configuring other software. Buildbot is a full-fledged continuous integration tool, supporting automated builds and testing and a variety of change-notification tools. It all ships as a Python package, meaning its installation and use do not tax resources very much. The tutorial on the Buildbot website will get you up and running, and the manual is incredibly detailed. Buildbot is an excellent starting point for someone who is bringing continuous integration into their development process, or who is interested in expanding their skillset.

Not everyone is a lover of Python, but many people are lovers of Node.js. For the Node.js aficionado, another open source continuous integration solution is NCI. NCI, short for Node.js Continuous Integration, is a tool written in and used with Node.js. The JavaScript base of Node is a powerful attraction for many people, especially beginners whose coding experience is mostly in JavaScript. Using NCI does introduce the requirement of having Node.js, which can be onerous, but Node is worth installing if you don't have it already. If you already use Node, NCI can be installed using npm, because it is a Node package. A basic start-up tutorial for NCI is located here. The documentation is not as clear, or as large, as that of Buildbot. This is in part because NCI is part of the Node ecosystem, so many of its plugins and dependencies have separate documentation. NCI is also a bit less powerful than Buildbot out of the box. One of the benefits of NCI is its modularity: servers can be large and complex or small and simple; it just depends on what the user wants.

To end on somewhat of a side note: some continuous integration tools may simply be too powerful and complex for a given developer's needs. I myself work with one other developer on certain projects. These projects tend to be small, and the utility of a full-fledged continuous integration solution is really less than the cost and hassle. One example of powerful collaboration software with many continuous integration elements is GitLab. GitLab could certainly serve as a full-fledged continuous integration solution, but the community version of GitLab is perhaps better suited as simply a collaboration tool for smaller projects.

Author: Erik Kappelman is a transportation modeler for the Montana Department of Transportation. He is also the CEO of Duplovici, a technology consulting and web design company.

Elm and TypeScript – Static typing on the Frontend

Eduard Kyvenko
16 Nov 2016
5 min read
This post explores the functional aspects of both Elm and TypeScript, providing a better understanding of both programming languages. Elm and TypeScript both use JavaScript as a compile target, but they have major differences, which we'll examine here.

Elm

Elm is a functional language with static type analysis and a strong, inferred type system. In other words, the Elm compiler only runs type checks during compilation, and can predict types for all expressions without explicit type annotations. This guarantees the absence of runtime exceptions due to type mismatches. Elm supports generic types and structural type definitions.

TypeScript

TypeScript is a superset of JavaScript, but unlike Elm it is a multi-paradigm language with a strong imperative part and a significantly weaker functional part. It also has a static type checker and supports generic types and structural typing. TypeScript has type inference too; although it is not as reliable as Elm's, it is still quite useful.

Functions

Let's see how both languages handle functions.

Elm

Functions are where Elm shines. Function declarations and lambda expressions are supported, and type definitions are simple and robust. It is worth mentioning that lambda expressions can only have a type definition when used as a return value from a function declaration.

-- Function declaration
add : Int -> Int -> Int
add x y = x + y

-- Lambda expression
\x y -> x + y

-- Type definition for a lambda expression
add : Int -> Int -> Int
add = \x y -> x + y

TypeScript

TypeScript offers the standard set from JavaScript: function declarations, function expressions, and arrow functions.

// Function declaration
function add(x: number, y: number): number {
  return x + y;
}

// Function expression
const add = function (x: number, y: number): number {
  return x + y;
};

// Arrow function expression
const add = (x: number, y: number): number => {
  return x + y;
};

Structural typing

Let's see how they both stack up.

Elm

Structural type annotations are available for tuples and records. Records are primarily used for modelling abstract data types, and type aliases can be used as value constructors for the said data type.

type alias User =
    { name : String
    , surname : Maybe String
    }

displayName : User -> String
displayName { name, surname } =
    case surname of
        Just value ->
            name ++ " " ++ value

        Nothing ->
            name

displayName (User "John" (Just "Doe")) -- John Doe

TypeScript

Structural typing is done with interfaces, and it is possible to destructure values. It is also worth mentioning that classes can implement interfaces.

interface User {
    name: string;
    surname?: string;
}

function displayName({ name, surname = "" }: User): string {
    return name + ' ' + surname;
}

console.log(displayName({ name: 'John', surname: 'Doe' }));

Union types

Elm and TypeScript both handle union types.

Elm

Union types in Elm are the essential tool for defining abstract data structures with a dynamic nature. Let's have a look at the Maybe type. The union describes two possible states; the closest you can get to this in TypeScript is an optional value. The type variable a points out that the stored value might belong to any data type.

type Maybe a
    = Just a
    | Nothing

This might be useful and makes a lot of sense in a functional language. Here is an example of how you might use it, crashing the program explicitly when a missing value indicates a logical error in the state of the application:

displayName userName =
    case userName of
        Just name ->
            String.toUpper name

        Nothing ->
            Debug.crash "Name is missing"

displayName (Just "Bond") -- BOND
displayName Nothing -- Causes a run-time error explicitly

TypeScript

Union types in TypeScript are currently referred to as discriminated unions. Here is an example of using a union to define the list of actions available to a dispatcher.

interface ActionSendId {
    name: "ID";
    data: number;
}

interface ActionSendName {
    name: "NAME";
    data: string;
}

function dispatch(action: ActionSendId | ActionSendName): void {
    switch (action.name) {
        case "ID":
            sendId(action.data);
            break;
        case "NAME":
            sendName(action.data);
            break;
        default:
            break;
    }
}

Interoperation with JavaScript

I will only focus on the aspects that are affected by the implementation of a type system.

Elm

You can pass values from JavaScript to Elm once, during the initialization process; there is a special type of program for that, called programWithFlags. The Elm application can also inter-operate with JavaScript directly using special interfaces called ports, which implement a signal pattern. Sending a value of an unexpected type will cause an error. During HTTP communication you have to decode and encode values using Json.Decode and Json.Encode, and for DOM events you can use Json.Decode to retrieve values from an Event object.

TypeScript

Using JavaScript with TypeScript is quite possible, but you will have to specify type definitions for the code. As an option, you can use a special type: any. The any type is a powerful way to work with existing JavaScript, allowing you to gradually opt in to and out of type checking during compilation. As an alternative, you might have to provide typing files.

Conclusion

Both Elm and TypeScript have their strengths and weaknesses, but despite all of the differences, both type systems give you similar benefits. Elm has the upper hand with type inference, thanks to the purely functional nature of the language and its strict inter-operation with the outside world. TypeScript does not guarantee a type-error-free runtime, but it's easier to pick up, and very intuitive if you have a JavaScript or C# background.

About the author

Eduard Kyvenko is a frontend lead at Dinero. He has been working with Elm for over half a year and has built a tax return and financial statements app for Dinero. You can find him on GitHub at @halfzebra.

Building Better Bundles: Why process.env.NODE_ENV Matters for Optimized Builds

Mark Erikson
14 Nov 2016
5 min read
JavaScript developers are keenly aware of the need to reduce the size of deployed assets, especially in today's world of single-page apps. This usually means running increasingly complex JavaScript codebases through build steps that produce a minified bundle for deployment. However, if you read a typical tutorial on setting up a build tool like Browserify or Webpack, you'll see numerous references to a variable called process.env.NODE_ENV. Tutorials always talk about how this needs to be set to a value like "production" in order to produce a properly optimized bundle, but most articles never really spell out why this value matters and how it relates to build optimization. Here's an explanation of why process.env.NODE_ENV is used and how it fits into the typical build process.

Operating system environment variables are widely used as a method of configuring applications, especially as a way to activate behavior based on different deployment environments (such as development vs testing vs production). Node.js exposes the current process's environment variables to the script as an object called process.env. From there, the Express web server framework popularized using an environment variable called NODE_ENV as a flag to indicate whether the server should be running in "development" mode vs "production" mode. At runtime, the script looks up that value by checking process.env.NODE_ENV.

Because it was used within the Node ecosystem, browser-focused libraries also started using it to determine what environment they were running in, and using it to control optimizations and debug-mode behavior. For example, React uses it as the equivalent of a C preprocessor #ifdef to act as conditional checking for debug logging and perf tracking, roughly like this:

function someInternalReactFunction() {
  // do actual work part 1

  if (process.env.NODE_ENV === "development") {
    // do debug-only work, like recording perf stats
  }

  // do actual work part 2
}

If process.env.NODE_ENV is set to "production", all those if clauses will evaluate to false, and the potentially expensive debug code won't run. In addition, in conjunction with a tool like UglifyJS that does minification and removal of dead code blocks, a clause that is surrounded with if(process.env.NODE_ENV === "development") will become dead code in a production build and be stripped out, thus reducing bundled code size and execution time.

However, because the NODE_ENV environment variable and the corresponding process.env.NODE_ENV runtime field are normally server-only concepts, by default those values do not exist in client-side code. This is where build tools such as Webpack's DefinePlugin or the Browserify Envify transform come in: they perform search-and-replace operations on the original source code. Since these build tools are transforming your code anyway, they can force the existence of global values such as process.env.NODE_ENV. (It's also important to note that because DefinePlugin in particular does a direct text replacement, the value given to DefinePlugin must include actual quotes inside of the string itself. Typically, this is done either with alternate quotes, such as '"production"', or by using JSON.stringify("production").)

Here's the key: the build tool could set that value to anything, based on any condition that you want, as you're defining your build configuration.
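As a concrete sketch, a production config along those lines might look like this (the entry and output paths are placeholders, and UglifyJsPlugin reflects the Webpack 1/2-era tooling this article describes):

var webpack = require('webpack');

module.exports = {
  entry: './src/index.js',   // placeholder entry point
  output: {
    path: __dirname + '/dist',
    filename: 'bundle.js'
  },
  plugins: [
    // Text-replaces process.env.NODE_ENV throughout the client bundle.
    // Note the JSON.stringify, so the replacement includes the quotes.
    new webpack.DefinePlugin({
      'process.env.NODE_ENV': JSON.stringify('production')
    }),
    // Minifies the bundle and strips the resulting dead code blocks.
    new webpack.optimize.UglifyJsPlugin()
  ]
};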
For example, I could have a webpack.production.config.js config file, much like the sketch above, that always uses the DefinePlugin to set that value to "production" throughout the client-side bundle. It wouldn't have to check the actual current value of the "real" process.env.NODE_ENV variable while generating the Webpack config, because as the developer I would know that any time I'm doing a "production" build, I want to set that value in the client code to "production".

This is where the "code I'm running as part of my build process" and "code I'm outputting from my build process" worlds come together. Because your build script is itself most likely JavaScript code running under Node, it's going to have process.env.NODE_ENV available to it as it runs. Because so many tools and libraries already share the convention of using that field's value to determine their dev-vs-production status, the common convention is to use the current value of that field inside the build script, as it runs, to also determine the value of that field as applied to the client code being transformed.

Ultimately, it all comes down to a few key points:

- NODE_ENV is a system environment variable that Node exposes to running scripts.
- It's used, by convention, to determine dev-vs-prod behavior by server tools, build scripts, and client-side libraries alike.
- It's commonly used inside of build scripts (such as Webpack config generation) as both an input value and an output value, but the tie between the two is still just convention.
- Build tools generally do a transform step on the client-side code, replacing any references to process.env.NODE_ENV with the desired value. The resulting code will contain dead code blocks, as debug-only code is now inside an if(false)-type condition, ensuring that code doesn't execute at runtime.
- Minifier tools such as UglifyJS will strip out the dead code blocks, leaving the production bundle smaller.

So, the next time you see process.env.NODE_ENV mentioned in a build script, hopefully you'll have a much better idea why it's there.

About the author

Mark Erikson is a software engineer living in southwest Ohio, USA, where he patiently awaits the annual heartbreak from the Reds and the Bengals. Mark is the author of the Redux FAQ, maintains the React/Redux Links list and Redux Addons Catalog, and occasionally tweets at @acemarke. He can usually be found in the Reactiflux chat channels, answering questions about React and Redux. He is also slightly disturbed by the number of third-person references he has written in this bio!
Stop thinking in components

Eduard Kyvenko
11 Nov 2016
5 min read
In this post, you will get a high-level overview of the theory behind the modern architecture of client-side Web applications, establish basic terminology, and define the main problem of component-based software engineering. You will be able to translate that knowledge onto real examples of component-based technologies, such as React and Angular, to see how they differ from Elm.

Components: the building blocks of modern Web UI

Component-based development is the lingua franca of modern frontend programming. We have been using this term for years, but what exactly is a component?

The definition of Component

From Computer Science we know that a component is a software package or module that encapsulates related functions or data. The component, as a design pattern, originates from traditional Object Oriented Programming. A component-based Web application consists of multiple components and some sort of infrastructure for component communication. In the real world a component is usually represented by a class, for example an Angular 2 Component or a React Component. (Please don't confuse these with Web Components.)

Object Oriented Programming and Components

Traditional Object Oriented Programming, as a design pattern, focuses on building a class hierarchy with a chain of inheritance. Component-based software engineering, despite still using classes, prefers composition over inheritance. Modern Web applications usually implement an architecture that includes components and an infrastructure for their communication.

The problem of distributed state

Components rely on classes, which contain mutable state as properties. JavaScript is well known as one of the mainstream asynchronous programming languages. Maintaining distributed state with many relations is hard when every piece of data is scattered around the components, and it's not getting easier when code is executed asynchronously. Asynchronous state management is a hard task, and it only gets worse with scale. Many JavaScript libraries, and the ES2015 spec itself, attempt to provide an experience of synchronous-looking code while we're actually writing asynchronous code, which often leads to a situation where we have a problem with promises.

Many design patterns aim to solve the problem of component communication with distributed state. The Observer pattern is one of the traditional ways to establish component communication in a component-based application. From your frontend experience, you might be familiar with its three main variants:

- Event Emitter: EventEmitter from Node.js, or the Events mixin from the Backbone.js library.
- Pub Sub: not as widely used, because it requires you to follow a certain architecture; see PubSubJS.
- Signal (often referred to as a Stream): see rxjs or xstream.

Redux was a big game-changer in state management with React, but any great power comes with responsibility. A lot of people have been struggling with Redux, which led to a logical conclusion in the post by Dan Abramov, You Might Not Need Redux. The problem is that functional patterns are harder to implement in a language that has a strong imperative side.

Elm Architecture

An Elm application does not have components; its primary building blocks are pure functions. A minimal application consists of three main parts:

- The update function: this is where you handle state changes; it accepts the current state and returns the next state.
- The view function: this produces the output of your program; it accepts only one argument, the state of your application.
- The initial state of the model: this simply defines the initial state of your application.

Technically, the whole Elm application is a component. It exposes a Signal, or Reactive Stream, API for interoperation with the outside world or other components. State is a unified storage of all data in the application. Every stateful Elm application is built with a module called Html.App. As of today, Elm's main focus is on applications that produce a DOM tree as output.

Stop thinking about components

Component communication is a term that refers to an implementation of a certain infrastructure between two or more inter-operating classes with encapsulated state. The Elm Architecture instead composes update functions to form a pipeline for changing one unified immutable state, whose initial value is produced by plain functions. Code splitting is not the same as componentization; you can have as many modules as you want. You can check out an example of module composition in Elm on GitHub; it implements the so-called Fractal Architecture.

Practical advice on scaling Elm applications

- Code organization should not be prioritized over business logic. Code splitting is rather harmful during the early stages of development.
- Omit type signatures at the beginning. Without specific signatures you will have more freedom while experimenting.
- Elm code is extremely easy to refactor, so you can focus on getting stuff done and clean up later.
- When refactoring, try to do one thing at a time; the compiler is extremely useful, but if you don't define a limited scope for the refactoring process, you will get overwhelmed by compiler errors.

Elm is offering an alternative

Component-based development is the lingua franca of modern frontend programming. The Elm Architecture is a polished design pattern that offers a lot of interesting ideas, some of them drastically different from what we are used to. While component-based architectures written in imperative languages have their strong sides, Elm can provide a better way of implementing asynchronous flows. Function-based business logic is far more reliable and easier to test.

About the author

Eduard Kyvenko is the frontend lead at Dinero. He has been working with Elm for over half a year and has built a tax return and financial statements app for Dinero. You can find him on GitHub at @halfzebra.

A Look into WebVR Development

Paul Dechov
08 Nov 2016
7 min read
Virtual reality technology is right in the middle of becoming massively available. But it has been considered farfetched, a distant holy grail largely confined to fiction, for long enough that it is still too easy to harbor certain myths:

- Working within this medium must require a long and difficult journey to pick up a lot of prerequisite expertise.
- It must exist beyond the power and scope of the web browser, which was not intended for such rich media experiences.

Does VR belong in a Web Browser?

The Internet is, of course, an astoundingly open and democratic communications medium, operating at an incredible scale. Browsers, considered as general software for delivering media content over the Internet, have steadily gained functionality and rendering power, and have been extraordinarily successful in combining media content with the values of the Internet. The native rendering mechanism at work in the browser environment consumes descriptions of:

* Structure and content in HTML: simple, static, and declarative
* Styling in CSS (optionally embedded in HTML)
* Behavior in JavaScript (optionally embedded in HTML): powerful but complex and loosely structured (oddly too powerful in some ways, and abuses such as pop-up ads were originally its most noticeable applications)

The primary metaphor of the Web as people first came to know it focused on traveling from place to place along connected pathways, reinforced by terms like "navigate", "explore", and "site". In this light, it is apparent that the next logical step is right inside of the images, videos, and games that have proliferated around the world thanks to the web platform, and that this is no departure from the ongoing evolution of that platform.

On social media, we share a lot of content, but interact using low-bandwidth channels (limited media snippets, mostly text). Going forward, these will increasingly blend into shared virtual interactions of unlimited sensory richness and creativity. This has existed to a small extent for some time in the realm of first-person online games, and will see its true potential with the scale and generality of the Web and the immersive power of VR.

A quick aside on compatibility and accessibility: the consequence of such a widely available Web is that your audience has a vast range of different devices, as well as a vast range of different sensorimotor capabilities. This is quite relevant when pushing the limits of the platform, and it is always wise to consider how the experience degrades when certain things are missing, and how best to fall back to an acceptable (not broken) variant of the content in these cases.

I believe that VR and the Web will accelerate each other's growth in this third decade of the Web, just as we saw with earlier media: images in its first decade and video in its second. Hyperlinks, essential to the experience of the Web, will teleport us from one site to another: this is one of many research avenues explored by eleVR, a pioneering research team led by Vi Hart and funded by Y Combinator Research.

Adopted enthusiastically by the Chrome and Firefox browsers, and with Microsoft recently announcing support in its Edge browser, it is fair to say that widespread support for WebVR is imminent. For browsers that are not likely to support the technology natively in the near future (for example, Safari), there is a polyfill you can include so that you can use it anyway, and this is one of the things A-Frame takes care of for you.
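To make that concrete, here is a minimal sketch of feature-detecting WebVR against the 2016-era WebVR 1.0 API, which is what the polyfill emulates:

// Detect native WebVR support and look for a connected headset.
if (navigator.getVRDisplays) {
  navigator.getVRDisplays().then(function (displays) {
    if (displays.length > 0) {
      console.log('VR display found: ' + displays[0].displayName);
    } else {
      console.log('WebVR is supported, but no headset is connected.');
    }
  });
} else {
  console.log('No native WebVR; include the polyfill to use it anyway.');
}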
A-Frame

A-Frame is a framework from Mozilla that:

* Provides a bundle of conveniences that automatically prepare your client-side app environment for VR.
* Exposes a declarative interface for the composition of modules (aspects of appearance or functionality).
* Exposes a simple interface for writing your own modules, and encourages sharing and reusability of modules in the A-Frame community.

The only essential structural elements are:

* <a-scene>, the container
* <a-entity>, an empty shell with transform attributes like position, rotation, and scale, but with neither appearance nor behavior

HTML was designed to make web design accessible to non-programmers, and opened up the ability for those without prior technical skills to learn to build a home page. A-Frame brings this power and simplicity to VR, making the creation of virtual environments accessible to all regardless of their experience level with 3D graphics and programming. The A-Frame Inspector also provides a powerful UI for VR creation using A-Frame.

Example: Visualization

Some stars are in fact extremely bright, while others appear bright because they are so near (such as the brightest star in the sky, Sirius). I will describe just one possible use of VR to communicate these differences in star distance effectively. Setting the scene:

<a-scene>
  <a-sky color="black"></a-sky>
  <a-light type="ambient" color="white"></a-light>
</a-scene>

A default camera is automatically configured for us, but if we want a different perspective we could add and position an <a-camera> element. Using a dataset of stars' coordinates and visual magnitudes, we can plot the brightest stars in the sky to create a simulated sky view (a virtual stellarium), but scaled so as to be able to perceive the distances with our eyes, immediately and intuitively. The role of VR in this example is to hook into our familiar experience of looking around at the sky and our hardwired depth perception. This approach has the potential to foster a deeper and more lasting appreciation of the data than numbers in a spreadsheet can, or than abstract one- or two-dimensional depictions can, for that matter.

Inside the scene, per star:

<a-sphere position="100 120 -60" radius="1" color="white">

The position would be derived from the coordinates of the star, and the radius could reflect the absolute visual magnitude of the star. We could also have the color reflect the spectral range of the star.

A-Frame includes basic interaction handlers that work with a variety of rendering modes. While hovering over (or in VR, gazing at) a star, we could set up JavaScript handlers to view more information about it; a sketch of this wiring follows below. By clicking on (in VR, pressing the button while gazing at) one of these stars, we could perhaps transform the view by shifting the camera and looking at the sky from that star's point of view. Or we could zoom in to a detailed view of that star, visualizing its planetary system, and so on.

Now, we could also treat this visualization as a depiction of abstract data points or concepts. For instance, the points in space could be people, and the distance could represent any combination of weighted criteria; you would see, with the full power of your vision, how near or far others are in terms of those criteria. This simple perspective enables immersive data storytelling, allowing you to examine entities in any given domain space.
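Here is a rough sketch of that interaction wiring. It assumes the scene includes a cursor (for example, an <a-cursor> element), which is what translates gaze into mouseenter, mouseleave, and click events on entities; showStarDetails is a hypothetical helper:

// Wire up hover and click behavior for every star in the scene.
var stars = document.querySelectorAll('a-sphere');
for (var i = 0; i < stars.length; i++) {
  (function (star) {
    star.addEventListener('mouseenter', function () {
      star.setAttribute('color', 'yellow'); // highlight the gazed-at star
    });
    star.addEventListener('mouseleave', function () {
      star.setAttribute('color', 'white');  // restore the original color
    });
    star.addEventListener('click', function () {
      showStarDetails(star); // hypothetical: show info or shift the camera
    });
  })(stars[i]);
}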
My team at TWO-N makes heavy use of D3 and React in our work (among many other open source tools), both of which work seamlessly and automatically with A-Frame due to the nature of the interface that A-Frame provides. Whether you're writing or generating literal HTML, or relying on tools to help you manage the DOM dynamically, it's ultimately all about attaching elements to the document and setting their attributes; this is the browser's native content rendering engine, around which the client-side JavaScript ecosystem is built.

About the author

Paul Dechov is a visualization engineer at TWO-N.

Five Biggest Challenges in Information Security in 2017

Charanjit Singh
08 Nov 2016
5 min read
Living in the digital age brings its own challenges. News of security breaches in well-known companies is becoming a normal thing. In the battle between those who want to secure the Internet and those who want to exploit its security vulnerabilities, here's a list of five significant security challenges that I think information security is/will be facing in 2017. Army of young developers Everyone's beloved celebrity is encouraging the population to learn how to code, and it's working. Learning to code is becoming easier every day. There are loads of apps and programs to help people learn to code. But not many of them care to teach how to write secure code. Security is usually left as an afterthought, an "advanced" topic to learn sometime in future. Even without the recent fame, software development is a lucrative career. It has attracted a lot of 9-to-5ers who just care about getting through the day and collecting their paycheck. This army of young developers who care little about the craft is most to blame when it comes to vulnerabilities in applications. It would astonish you to learn how many people simply don't care about the security of their applications. The pressure to ship and ever-slipping deadlines don't make it any better. Rise of the robots I mean IoT devices. Sorry, I couldn't resist the temptation. IoT devices are everywhere. "Internet of Things" they call it. As if Internet wasn't insecure enough already, it's on "things" now. Most of these things rarely have any concept of security. Your refrigerator can read your tweets, and so can your 13-year-old neighbor. We've already seen a lot of famous disclosures of cars getting hacked. It's one of the examples of how dangerous it can get. Routers and other such infrastructure devices are becoming smarter and smarter. The more power they get, the more lucrative they become for a hacker to attack them. Your computer may have a firewall and anti-virus and other fancy security software, but your router might not. Most people don't even change the default password for such devices. It's much easier for an attacker to simply control your means of connecting to the Internet than connecting to your device directly. On the other front, these devices can be (and have been) used as bots to launch attacks (like DDoS) elsewhere. Internet advertisements as malware The Internet economy is hugely dependent on advertisements. Advertisements is a big big business, but it is becoming uglier and uglier every day. As if tracking users all over the webs and breaching their privacy was not enough, advertisements are now used for spreading malware. Ads are very attractive to attackers as they can be used to distribute content on fully legitimate sites without actually compromising them. They've already been in the news for this very reason lately. So the Internet can potentially be used to do great damage. Mobile devices Mobile apps go everywhere you go. That cute little tap game you installed yesterday might result in the demise of your business. But that's just the tip of the iceberg. Android will hopefully add essential features to limit permissions granted to installed apps. New exploits are emerging everyday for vulnerabilities in mobile operating systems and even in the processor chips. Your company might have a secure network with every box checked, but what about the laptop and mobile device that Cindy brought in? 
Organizations need to be ever more careful about the electronic devices their employees bring onto the premises, or use to connect to the company network. The house of cards crumbles fast if attackers get access to the network through a legitimate medium.

The weakest links

If you follow the show Mr. Robot (you should; it's brilliant), you might remember a scene from the first season in which they plan to attack the "impenetrable" Steel Mountain. Quoting Elliot:

Nothing is actually impenetrable. A place like this says it is, and it’s close, but people still built this place, and if you can hack the right person, all of a sudden you have a piece of powerful malware. People always make the best exploits.

People are the weakest links in many technically secure setups. They're the easiest to hack. Social engineering is the most common (and probably the easiest) way to get access to an otherwise secure system, and with the rise of advanced social engineering techniques, it is becoming crucial to teach employees how to detect and prevent such attacks. Even if your developers are writing secure code, it doesn't matter if a customer care representative just gives the password away or grants access to an attacker. Here's a video of how someone can break into your phone account with a simple call to your phone company. Once your phone account is gone, all your two-factor authentications (those that depend on SMS-based OTPs) are worth nothing.

About the author

Charanjit Singh is a freelance JavaScript (React/Express) developer. Being an avid fan of functional programming, he's on his way to taking on Haskell/PureScript as his main professional languages.

5 Reasons to Learn ReactJS

Sam Wood
04 Nov 2016
3 min read
Created by Facebook, ReactJS has been quick to storm onto the JavaScript stage. But is it really worth picking up, especially over more established options like Ember or Angular? Here are five great reasons to learn React.

1. If you want to build high-performance JS mobile apps

If you're a JavaScript developer who wants to develop for mobile, there are a bunch of options to choose from: Cordova, Ionic, and more all allow you to use your JavaScript skills to build apps for Android and iOS. But React Native, React's spin-off platform for mobile development, is very different. Rather than running a JavaScript-powered app in your mobile web browser, React Native renders your app with the native UI components of the respective mobile OS. What does this mean? It means you get to develop entirely in JavaScript without passing any performance compromise on to your users. React Native apps run as swiftly and seamlessly as those built using native tools like Xcode.

2. If your web app regularly changes state

If your single-page web app needs to react regularly to state changes, you'll want to seriously consider React (the clue is in the name). React is built on the idea of minimizing DOM operations: they're expensive, and the fewer you have, the better. Instead of the actual DOM, React gives you a virtual DOM to render to, which lets it compute the minimum number of DOM operations needed to reach the new desired state. With React, you can often stop worrying about DOM performance altogether. It's simple to re-render an entire page whenever your state changes, which means your code is smaller, sleeker, and simpler, and simpler code has fewer bugs.

3. If you want to easily reuse your code

One of React's biggest features is container components. What are those? The idea is simple: a container does the data fetching, and then renders the data into a corresponding sub-component that shares the same name. This separates your data-fetching concerns from your rendering concerns entirely, making your React code much, much more reusable across projects (see the sketch at the end of this post).

4. If you like control over your stack

It's a common refrain among those asked to compare React to JavaScript frameworks like Angular: React's not a framework, it's a library. What does this mean? It means you can have complete control over your stack. Don't like a bit of React? You can always swap in another JavaScript library and run things your way.

5. If you want to be in on the ground floor of the next big thing

There are thousands of experienced Angular developers, many of whom are progressing to Angular 2. In contrast, React is young, scrappy, and hungry, and you'll be hard pressed to find anyone with more than a year or so's experience using it. Despite that, employer demand is rising fast. If you're looking to stand out from the crowd, React skills are an excellent thing to have on your resume.

Commit to building your next development project with React. Or maybe Angular. Whatever you decide, pick up some of our very best content from 7th to 13th November 2016 here.
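If you'd like to see the container component pattern from point 3 in code, here is a minimal sketch. The component names and the /api/users endpoint are illustrative, not from any particular project:

    // UserListContainer fetches the data; UserList only renders it.
    import React from 'react';

    function UserList({ users }) {
      // Purely presentational: no data-fetching concerns here.
      return (
        <ul>
          {users.map(user => <li key={user.id}>{user.name}</li>)}
        </ul>
      );
    }

    class UserListContainer extends React.Component {
      constructor(props) {
        super(props);
        this.state = { users: [] };
      }

      componentDidMount() {
        // The container owns the data fetching.
        fetch('/api/users')
          .then(response => response.json())
          .then(users => this.setState({ users }));
      }

      render() {
        return <UserList users={this.state.users} />;
      }
    }

Because UserList never knows where its data comes from, it can be dropped into any project that can supply a users array.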

Thinking outside the Skybox – Developing Cinematic VR Experiences for the Web

Paul Dechov
04 Nov 2016
6 min read
We as a society are on the road to mastering a powerful new medium. Even just by dipping your toes into the space of virtual reality, it is easy to find vast unexplored territory and a wealth of potential in this next stage of human communication. As with movies around the turn of the 20th century, there is a mix of elements borrowed from older media combined with wild experimentation. Thanks to accelerating progress, it will not take too long to home in on the unique essence of immersive virtual worlds. Yet we know there are still many untapped opportunities around creating, capturing, and delivering these worlds and shaping the experiences inside them.

Chris Milk is a pioneer in both the content and technology aspects of virtual reality filmmaking. He produced a short film, "Evolution of Verse", available (along with many others) through his company's app Within, that tells the story of the emergence of the VR medium in abstract form. It bravely tests the limits of immersive art, serves as an astounding introductory illustration of the potential for visual effects in virtual reality, and contains an exhilarating homage to humanity's initial experience of the movies.

On the Web, the platform that is the most open, connected, and accessible for both creators and audiences, we now have the opportunity to take our creations across the barrier of limitations established by the format of the screen and the norms of the platform. We can bring along those things that work well on a flat screen, and we will come to rethink them as we experiment with the newfound ability of the audience to convincingly perceive our work in the first person.

What is Virtual Reality?

First, a quick survey of consumer head-mounted displays, in rough order of increasing power and price:

Mobile: Google Cardboard, Samsung Gear VR, Google Daydream (coming soon)
Tethered: Sony PlayStation VR (coming soon), Oculus Rift, HTC Vive

It is helpful to analyze the medium of virtual reality in terms of its various immersive facets:

Head-tracking: Most crucially, the angle of your head is mapped in real time to that of the virtual camera. A major hallmark of our visual experience of physical reality is turning your head in order to look around or to face something. This capability is leveraged by the category of 360° videos (note that the name evokes looking around in a circle, but generally you can look up and down as well). This is a radical departure in terms of cinematography, as directing attention via a rectangular frame is no longer an option.

Stereoscopy: Seeing depth as a third spatial dimension is a major sensory advantage for perceiving complex objects and local surroundings. Though there is a trade-off between depth perception and perspective distortion, 3D undeniably contributes a sense of presence, and therefore a strong element of immersion. 3D can be compared with stereo sound, which was also regarded for decades as a novelty and a gimmick before achieving ubiquity in the 1960s (on that note, positional audio is another significant factor in delivering an immersive experience).

Isolation: Blocking out ambient light, noise, and distractions that would dilute the visual experience, akin to noise-blocking or noise-canceling headphones.

Motion tracking: Enables so-called "room-scale VR", which allows you to move through virtual environments and around virtual objects. This can greatly heighten the fidelity of the experience, and comes with some interesting challenges.
This capability is currently only available with the HTC Vive but we will soon see it on mobile, put forward by Google's Project Tango. Button: Works as a mouse click in combination with a cursor in the center of your field of view. Motion-tracked hand controller: Again, this is currently a feature of the HTC Vive only, but Oculus and Google's Daydream will be coming out with controllers, as will PlayStation VR using PlayStation Move controllers. Even fairly basic applications of these controllers like Tilt Brush have immense appeal. Immersive Graphics on the Web There is one sequence of "Evolution of Verse" that is reminiscent of one of my favorite THREE.js demos, of flocking birds. In pursuit of advanced hardware acceleration, this demo uses shaders in order to support the real-time navigation and animation of thousands of birds (i.e. boids) at once. A-Frame is a high-level wrapper around THREE.js that provides a simple, structured interface to immersive 3D graphics. An advanced feature of A-Frame materials allows you to register shaders (a low-level drawing subroutine), and attach these materials to entities. Aside from the material, any number of other components could be added to the entity (lights, sounds, cameras, etc.), including perhaps one that encapsulates the boid navigation logic using a custom component (which are simple to write). A-Frame has great support for importing 3D objects and scenes (downloaded from Sketchfab or clara.io, for instance) using the obj-model and collada-model components. An Asset Management System is also included, for caching and preprocessing assets like models and textures or images and videos. In the future it will also support the up-and-coming glTF standard runtime format for objects and scenes—comparable to the PNG format but for 3D content (with support for animation, however). This piece lives as an external component for now, one of many external resources available as part of the large A-Frame ecosystem. From flocks of birds to the many other techniques explored and validated by the WebGL community, immersive cinematic storytelling on the web has a bright future ahead. During the filming of "The Birds", Alfred Hitchcock found it necessary to insist on literally immersing his lead actress (and surrogate for the audience) in flocks of predatory birds. Perhaps in a more harmless way yet motivated by similar dramatic ambition, creators of web experiences will insist on staging their work to take full advantage of the new paradigm of simulated reality, and it is no longer too early to get started. Image credits: * left: cutestockfootage.com * right: NYT VR app About the author Paul Dechov is a visualization engineer at TWO-N.
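As promised above, here is a minimal sketch of registering a custom A-Frame component. The component name and behavior (a simple spin) are illustrative assumptions; it presumes the aframe script is already loaded on the page:

    // Register a 'spin' component that rotates its entity every frame.
    AFRAME.registerComponent('spin', {
      schema: {
        speed: { type: 'number', default: 30 } // degrees per second
      },

      tick: function (time, timeDelta) {
        // Advance the entity's Y rotation, scaled by frame time (ms).
        var rotation = this.el.getAttribute('rotation');
        rotation.y += this.data.speed * (timeDelta / 1000);
        this.el.setAttribute('rotation', rotation);
      }
    });

Once registered, the component attaches declaratively, e.g. <a-box spin="speed: 45"></a-box>; a boid component would follow the same shape, with the navigation logic living in tick.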

What is a RestfulAPI?

Tess Hsu
04 Nov 2016
4 min read
Well, a RESTful API is not only about actions but about a whole design pattern. Let's describe it with a scenario:

Roles: the browser is the client; the server is the shop.
Action: the client asks the shop, "Please give me black chocolate." The shop receives the request: "OK, got your message, I will send you black chocolate."
Result: the shop sends the client black chocolate.

You can see from this scenario that there are roles, verbs (ask, receive), and a result, and together they play out a drama called the "RESTful API".

REST stands for Representational State Transfer, and it is an architectural style. There is no standard for how you build your RESTful API (like, if you write a drama, there should be no standard pattern, right?). Instead, it's all about the interaction between browser and server, and how you communicate with the resource you want.

So a RESTful API covers this whole picture: the browser talks to the web server through methods such as GET, POST, PUT, and DELETE, and the browser then reacts to the result on the web page. Once you get this concept, you will see that the key point is the communication between browser and server. How do we do this? We primarily use HTTP to connect the two. So let's look at what an HTTP request is.

What is an HTTP request?

With an HTTP request, you can ask for a specific resource directly and get exactly the item you want, or you can send a more general request and retrieve one item out of a whole list. This depends on your request and on how complex the collection of resources happens to be. You can look at it this way:

Nouns: the resources themselves, like a whole collection of books; in HTTP this is a URL such as http://book.com/books/{list}.
Verbs: the actions involving our bookshelf; you can get a book, remove a book, and so on, and in HTTP you use GET, POST, PUT, and DELETE.
Content types: the book information, such as author name, editor, and publishing date; in HTTP you represent this information in XML, JSON, or YAML format.

So if I want book 1, the URL to get it can be http://book.com/books/1. If I want particular information, such as the author and the publish date of book 1, the URL can be http://book.com/books/1?authorInfo=John&publishDate=2016, where authorInfo is "John" and the publish date is "2016". In JSON format, the response looks like this:

    {
      "book": [{
        "authorInfo": "John",
        "publishDate": "2016"
      }]
    }

For these URLs you can use GET or POST to fetch the information, and if you want to know the difference between GET and POST, you can look here.

Every request returns a status, and HTTP uses status codes to report it:

200: OK
404: Not Found
500: Internal Server Error

Here's a link to look at the other statuses.

How do you access the API through the web browser?

You can open the web browser's developer tools, click on the Network tab, and watch the API calls run as the page binds its events. For example, the resource (noun) is https://stripe-s.pointroll.com/api/data/get/376?model=touareg&zip=55344, the method is GET, the resource identifier is 376?model=touareg&zip=55344, and the response comes back in JSON format.

And how does this information reach its final destination, the web browser? You can use any language, but here I use JavaScript. First, load the above resource.
Second, define the condition: if the request succeeds, read the offers list and pull out the deal title, "499/month for 36 months"; if it fails, show the error status on the page. Finally, show the result on the website. The concept code looks like this:

    $('#main-menu a').click(function(event) {
      event.preventDefault();
      $.ajax(this.href, {
        success: function(data) {
          $('#main').text(data.offers.deal.title);
        },
        error: function() {
          $('#notification-bar').text('An error occurred');
        }
      });
    });

So the final expectation is to show the title "499/month for 36 months" on the browser's web page.

Conclusion

The basic RESTful API concept simply reduces the communication work between the frontend and the backend. I recommend that you explore using it yourself and see how useful it can be. A server-side sketch follows at the end of this post.

About the author

Tess Hsu is a UI designer and frontend programmer and can be found on GitHub.
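To round out the picture from the server side, here is a minimal sketch of the book resource described above, written with Node.js and Express. The route, port, and data are illustrative assumptions:

    // A tiny RESTful endpoint for the book example (illustrative only).
    const express = require('express');
    const app = express();

    const books = {
      '1': { authorInfo: 'John', publishDate: '2016' }
    };

    // GET /books/1 responds with the JSON document shown earlier.
    app.get('/books/:id', (req, res) => {
      const book = books[req.params.id];
      if (!book) {
        // 404: Not Found, one of the status codes listed above.
        return res.status(404).json({ error: 'Not Found' });
      }
      res.json({ book: [book] }); // 200: OK is the default status
    });

    app.listen(3000);

A GET request to http://localhost:3000/books/1 returns the { "book": [...] } document, and any other id returns a 404.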

What is the API Economy?

Darrell Pratt
03 Nov 2016
5 min read
If you have pitched the idea of a set of APIs to your boss, you might have run across this question: "Why do we need an API, and what does it have to do with an economy?" The answer is the API economy, but that answer will more than likely be met with more questions. So let's take some time to unpack the concept and get past some of the hyperbole surrounding the topic.

An economy (from Greek οίκος, "household", and νέμoμαι, "manage") is an area of the production, distribution, or trade, and consumption of goods and services by different agents in a given geographical location. - Wikipedia

If we take that definition of economy from Wikipedia, and the definition of API as an Application Programming Interface, then what we should be striving to create, as producers of an API, is a platform that will attract a set of agents who will use that platform to create, trade, or distribute goods and services to other agents over the Internet (our geography has expanded). The central tenet of this economy is that the APIs themselves need to provide the right set of goods (data, transactions, and so on) to attract other agents (developers and business partners) who can grow their businesses alongside ours and further expand the economy. This piece from Gartner explains the API economy very well, and sums it up neatly: "The API economy is an enabler for turning a business or organization into a platform."

Let's explore APIs a bit more and look at a few examples of companies that are doing a good job of running API platforms.

The evolution of the API economy

If you had asked someone what an API actually was 10 or more years ago, you might have received puzzled looks. At that time, the Application Programming Interface was something the professional software developer used to interface with more traditional enterprise software. That evolved into the popularity of the SDK (Software Development Kit) and a better mainstream understanding of what it meant to create integrations or applications on pre-existing platforms. Think of the iOS SDK or Android SDK, and how those kits and the distribution channels that Apple and Google created have led to the explosion of the app marketplace.

Jeff Bezos's mandate that all IT assets at Amazon have an API was a major event on the API economy timeline. Amazon continued to build APIs such as SNS, SQS, Dynamo, and many others. Each of these API components provides a well-defined service that any application can use, and they have significantly reduced the barrier to entry for new software and service companies. With this foundation set, the list of companies providing deep API platforms has steadily grown.

How exactly does one profit in the API economy?

If we survey a small set of API platforms, we can see that companies use their APIs in different ways: to add value to an underlying set of goods, or to create a completely new revenue stream.

Amazon AWS

Amazon AWS is the clearest example of an API as a product unto itself. Amazon makes available a large set of services that provide defined functionality, and for which Amazon charges rates based on usage of CPU and storage (it gets complicated). Each new service they launch addresses a new area of need, and they work to provide integrations between the various services.

Social APIs

Facebook, Twitter, and others in the social space run API platforms to increase the usage of their properties.
Some of the inherent value in Facebook comes from sites and applications far afield from facebook.com, and its API platform enables this. Twitter has had a more complicated relationship with its API users over time, but the API does provide many methods that allow both apps and websites to tap into Twitter content, extending Twitter's reach and audience size.

Chat APIs

Slack has created a large economy of applications focused around its chat services, and has built up a large number of partners and smaller applications that add value to the platform. Slack's API approach is one that is centered on providing a platform for others to integrate with and add content into the Slack data system. This approach is more open than the one taken by Twitter, and the fast adoption has added large sums to Slack's current valuation. Alongside the meteoric rise of Slack, the concept of the bot as an assistant has also taken off. Companies like api.ai are offering AI-as-a-service to power chat services. The service offerings that surround the bot space are growing rapidly and offer a good set of examples of how a company can monetize its API.

Stripe

Stripe competes in the payments-as-a-service space along with PayPal, Square, and Braintree. Each of these companies offers an API platform that vastly simplifies the integration of payments into websites and applications. Anyone who built an e-commerce site before 2000 can and will appreciate the simplicity and power that the API economy brings to the payment industry. The pricing strategy in this space is generally per use and relatively straightforward.

It takes a community to make the API economy work

Very few companies will succeed by building an API platform without growing an active community of developers and partners around it. While the tooling available makes it technically easy to create an API, without an active support mechanism and detailed, easily consumable documentation, your developer community may never materialize. Facebook and AWS are great examples to follow here: they both actively engage with their developer communities and deliver rich sets of documentation and use cases for their APIs.

MongoDB: Issues You Should Pay Attention To

Tess Hsu
21 Oct 2016
4 min read
MongoDB, founded in 2007 and with more than 15 million downloads, excels at supporting real-time analytics for big data applications. Rather than storing data in tables made out of individual rows, MongoDB stores it in collections made out of JSON documents. But why use MongoDB? How does it work? And what issues should you pay attention to? Let's answer these questions in this post.

MongoDB, a NoSQL database

MongoDB is a NoSQL database, and NoSQL means "Not Only SQL". Data is structured as key-value pairs, as in JSON. The data types are very flexible, but that flexibility can be a problem if the structure isn't defined properly. Here are some good reasons to use MongoDB:

If you are a frontend developer, MongoDB is much easier to learn than MySQL, because the MongoDB base language is JavaScript and JSON.
MongoDB works well for big data; for instance, you can denormalize and flatten six tables into just two.
MongoDB is document-based, so it is a good fit if you have a lot of documents of a single type.

Now let's examine how MongoDB works, starting with installation:

1. Download MongoDB from https://www.mongodb.com/download-center#community.
2. Unzip your MongoDB file.
3. Create a folder for the database, for example Data/mydb.
4. Open a command prompt in the MongoDB path and run: mongod --dbpath ../data/mydb
5. Run mongo to make sure the server works.
6. Run show dbs, and you will see two databases: admin and local.
7. If you need to shut down the server, use db.shutdownServer() (run from the admin database).

MongoDB basic usage

Now that you have MongoDB on your system, let's examine some basic usage, covering insertion of a document, removal of a document, and how to drop a collection.

To insert a document, use the shell. Here we use employee as an example and insert a name, an account, and a country; the data comes back as JSON (see the sketch at the end of this post).

To remove documents:

    db.collection.remove({ condition }, justOne)

Set justOne to true to remove only the first matching document; if you want to remove all documents in a collection, use db.employee.remove({}).

To drop a collection (containing multiple documents) from the database, use:

    db.collection.drop()

For more commands, please look at the MongoDB documentation.

What to avoid

Let's examine some points that you should note when using MongoDB:

Not easy to change to another database: MongoDB isn't like other RDBMSes, and it can be difficult to migrate, for example, from MongoDB to Couchbase.
No support for ACID: ACID (Atomicity, Consistency, Isolation, Durability) is the foundation of transactions, but most NoSQL databases don't guarantee ACID, so you need more technical skill to achieve the same guarantees yourself.
No support for JOIN: Since a NoSQL database is non-relational, it does not support JOIN.
Document size is limited: MongoDB stores data in JSON documents, and it limits the size of those documents; the latest version supports up to 16 MB per document.
Searches are case-sensitive: For example, db.people.find({name: 'Russell'}) and db.people.find({name: 'russell'}) are different queries. You can filter with a regex, such as db.people.find({name: /Russell/i}), but this will affect performance.

I hope this post has provided you with some important points about MongoDB, which will help you decide whether this NoSQL database is a good fit for your big data solution.

About the author

Tess Hsu is a UI designer and frontend programmer. He can be found on GitHub.
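As promised above, here is what the employee insert might look like in the mongo shell; the field values are illustrative assumptions:

    // Insert a document into the employee collection (values are made up).
    db.employee.insert({ name: "John", account: "john01", country: "US" })

    // Reading it back shows the stored JSON document, including its _id.
    db.employee.find().pretty()

find().pretty() is just a convenience that formats the returned documents for reading.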

5 Mistakes Web Developers Make When Working with MongoDB

Charanjit Singh
21 Oct 2016
5 min read
MongoDB is a popular document-based NoSQL database. In this post, I am listing some mistakes that I've found developers make while working on MongoDB projects.

Database accessible from the Internet

Allowing your MongoDB database to be accessible from the Internet is the most common mistake I've found developers make in the wild. MongoDB's default configuration used to expose the database to the Internet; that is, you could connect to the database using the URL of the server it was running on. That makes perfect sense for beginners who might be deploying a database on a different machine, given how it is the path of least resistance, but in the real world it's a bad default that often goes unnoticed. A database (whether Mongo or any other) should be accessible only to your app. It should be hidden in a private local network that provides access to your app's server only. Although this vulnerability has been fixed in newer versions of MongoDB, make sure you change the config if you're upgrading from a previous version, and make sure the new junior developer you hired didn't expose to the Internet the database that serves the application. If a database accessible from the open Internet is a genuine requirement, pay special attention to securing it; a whitelist of the IP addresses that alone have access to the database is almost always a good idea.

Not having multiple database users with access roles

Another possible security risk is having a single MongoDB database user doing all of the work. This usually happens when developers with little knowledge, experience, or interest in databases handle the database management or setup, and when database management is treated as lesser work in smaller software shops (the kind I mostly get hired by). Well, it is not. A database is as important as the app itself; your app is most likely mainly an interface to the database. Having a single user to manage the database and using the same user to access it from the application is almost never a good idea, and it often exposes vulnerabilities that could have been avoided if the database user had limited access in the first place. NoSQL doesn't mean "secure" by default. Security should be considered when setting the database up, not left as something to be done "properly" after shipping.

Schema-less doesn't mean thoughtless

When someone asked Ronny why he chose MongoDB for his shiny new app, his response was that "it's schema-less, so it's more flexible". Schema-less can prove to be quite a useful feature, but with great power comes great responsibility. I have often found teams struggling with apps because they didn't think their data structure through when they started. MongoDB doesn't require you to have a schema, but that doesn't mean you shouldn't think properly about your data structure. Rushing in without putting much thought into how you're going to structure your documents is a sure recipe for disaster. Your app might be small, simple, and easy right now, but simple apps become complicated very quickly. You owe your future self a properly thought-out database schema. Most programming languages that provide an interface to MongoDB have libraries to impose some kind of schema on it; pick your favorite and use it religiously (there's a sketch at the end of this post).

Premature sharding

Sharding is an optimization, so doing it too soon is usually a bad idea.
Many times a single replica set is enough to run a fast, smooth MongoDB deployment that meets all of your needs. Most of the time, a bad schema and bad indexing are the real performance bottlenecks that users try to solve with sharding. In such cases sharding can do more harm than good, because you end up with poorly tuned shards that don't perform well either. Sharding should be considered when a specific resource, like RAM or concurrency, becomes a performance bottleneck on some particular machine. As a general rule, if your database fits on a single server, sharding provides little benefit anyway. Most MongoDB setups work successfully without ever needing sharding.

Replicas as backup

Replicas are not backups. You need to have a proper backup system in place for your database, and you should not treat replicas as a backup mechanism. Consider what would happen if you deployed the wrong code and it ruined the database: the replicas would simply follow the master and replicate the damage. There are a variety of ways to back up and restore your MongoDB, be it filesystem snapshots, mongodump, or a third-party service like MMS. Having proper, timely fire drills is also very important: you should be confident that the backups you're making can actually be used in a real-life scenario. Practice restoring your backups before you actually need them, and verify everything works as expected. A catastrophic failure in your production system should not be the first time you try to restore from backups (often only to find out you're backing up corrupt data).

About the author

Charanjit Singh is a freelance JavaScript (React/Express) developer. Being an avid fan of functional programming, he's on his way to taking on Haskell/PureScript as his main professional languages.
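As mentioned in the schema section above, here is a minimal sketch of imposing a schema with Mongoose, one such library for Node.js; the model name and fields are illustrative assumptions:

    // Define an application-level schema on top of schema-less MongoDB.
    const mongoose = require('mongoose');

    const userSchema = new mongoose.Schema({
      email: { type: String, required: true, unique: true },
      name: { type: String, required: true },
      createdAt: { type: Date, default: Date.now }
    });

    const User = mongoose.model('User', userSchema);

    // Documents that don't fit the schema are rejected before they
    // ever reach the database:
    const err = new User({}).validateSync();
    console.log(err.name); // 'ValidationError'

The database stays schema-less underneath, but every write from the app now goes through a single, thought-out structure.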