25 May 2015

Game Performance: Geometry Instancing

Posted by Shanee Nishry, Games Developer Advocate

Imagine a beautiful virtual forest with countless trees and plants, or a stadium with a huge crowd cheering in the stands. If you are feeling heroic, you might like the idea of an epic battle between armies.

Rendering a large number of meshes is essential for creating a beautiful scene like a forest, a cheering crowd or an army, but doing so naively is quite costly and reduces the frame rate. Fortunately, it can be done efficiently with a simple technique called Geometry Instancing.

Geometry instancing can be used in 2D games for rendering a large number of sprites, or in 3D for things like particles, characters and environments.

The NDK code sample More Teapots, which demonstrates the content of this article, can be found with the NDK inside the samples folder and in the git repository.

Support and Extensions

Geometry instancing is available from OpenGL ES 3.0, and on OpenGL ES 2.0 devices which support the GL_NV_draw_instanced or GL_EXT_draw_instanced extensions. More information on how to use the extensions is shown in the More Teapots demo.

Overview

Submitting draw calls causes OpenGL to queue commands to be sent to the GPU, which carries an expensive overhead that may affect performance. This overhead grows when changing states such as the alpha blending function, active shader, textures and buffers.

Geometry Instancing is a technique that combines multiple draws of the same mesh into a single draw call, resulting in reduced overhead and potentially increased performance. This works even when different transformations are required.

The algorithm

To explain how Geometry Instancing works, let’s quickly review traditional drawing.

Traditional Drawing

To draw a mesh you’d usually prepare a vertex buffer and an index buffer, bind your shader and buffers, set your uniforms such as a World View Projection matrix, and make a draw call.

To draw multiple instances using the same mesh you set new uniform values for the transformations and other data and call draw again. This is repeated for every instance.

Drawing with Geometry Instancing

Geometry Instancing reduces CPU overhead by collapsing the sequence described above into a single buffer update and a single draw call.

It works by using an additional buffer which contains the custom per-instance data needed by your shader, such as transformations, colors and lighting data.

The first change to your workflow is to create the additional buffer during the initialization stage.

To put it into code, let’s define an example per-instance data structure that includes a world view projection matrix and a color:

C++

struct PerInstanceData
{
    Mat4x4  WorldViewProj;
    Vector4 Color;
};

You also need to declare the structure in your shader. The easiest way is by creating a Uniform Block with an array:

GLSL

#define MAX_INSTANCES 512

layout(std140) uniform PerInstanceData {
    struct
    {
        mat4      uMVP;
        vec4      uColor;
    } Data[ MAX_INSTANCES ];
};

Note that uniform blocks have limited sizes. You can find the maximum number of bytes you can use by querying for GL_MAX_UNIFORM_BLOCK_SIZE using glGetIntegerv.

Example:

GLint max_block_size = 0;
glGetIntegerv( GL_MAX_UNIFORM_BLOCK_SIZE, &max_block_size );

Bind the uniform block on the CPU in your program’s initialization stage:

C++

#define MAX_INSTANCES 512
#define BINDING_POINT 1
GLuint shaderProgram; // Compiled shader program

// Bind Uniform Block
GLuint blockIndex = glGetUniformBlockIndex( shaderProgram, "PerInstanceData" );
glUniformBlockBinding( shaderProgram, blockIndex, BINDING_POINT );

And create a corresponding uniform buffer object:

C++

// Create Instance Buffer
GLuint instanceBuffer;

glGenBuffers( 1, &instanceBuffer );
glBindBuffer( GL_UNIFORM_BUFFER, instanceBuffer );
glBindBufferBase( GL_UNIFORM_BUFFER, BINDING_POINT, instanceBuffer );

// Initialize buffer size
glBufferData( GL_UNIFORM_BUFFER, MAX_INSTANCES * sizeof( PerInstanceData ), NULL, GL_DYNAMIC_DRAW );

The next step is to update the instance data every frame to reflect changes to the visible objects you are going to draw. Once you have your new instance buffer you can draw everything with a single call to glDrawElementsInstanced.

You update the instance buffer using glMapBufferRange. This function maps the buffer and returns a pointer to its data, allowing you to copy your per-instance data. Unmap the buffer using glUnmapBuffer when you are done.

Here is a simple example for updating the instance data:

const int NUM_SCENE_OBJECTS = …; // number of objects visible in your scene which share the same mesh

// Bind the buffer
glBindBuffer( GL_UNIFORM_BUFFER, instanceBuffer );

// Retrieve pointer to map the data
PerInstanceData* pBuffer = (PerInstanceData*) glMapBufferRange( GL_UNIFORM_BUFFER, 0,
                NUM_SCENE_OBJECTS * sizeof( PerInstanceData ),
                GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_RANGE_BIT );

// Iterate the scene objects
for ( int i = 0; i < NUM_SCENE_OBJECTS; ++i )
{
    pBuffer[ i ].WorldViewProj = ... // Copy World View Projection matrix
    pBuffer[ i ].Color = …               // Copy color
}

glUnmapBuffer( GL_UNIFORM_BUFFER ); // Unmap the buffer

And finally you can draw everything with a single call to glDrawElementsInstanced or glDrawArraysInstanced (depending on whether you are using an index buffer):

glDrawElementsInstanced( GL_TRIANGLES, NUM_INDICES, GL_UNSIGNED_SHORT, 0,
                NUM_SCENE_OBJECTS );

You are almost done! There is just one more step to do. In your shader you need to make use of the new uniform buffer object for your transformations and colors. In your shader main program:

void main()
{
    …
    gl_Position = PerInstanceData.Data[ gl_InstanceID ].uMVP * inPosition;
    outColor = PerInstanceData.Data[ gl_InstanceID ].uColor;
}

You might have noticed the use of gl_InstanceID. This is a predefined OpenGL vertex shader variable that tells your program which instance it is currently drawing. Using this variable, your shader can index into the instance data and pick the correct transformation and color for every vertex.

That’s it! You are now ready to use Geometry Instancing. If you are drawing the same mesh multiple times in a frame make sure to implement Geometry Instancing in your pipeline! This can greatly reduce overhead and improve performance.

21 May 2015

Always-on and Wi-Fi with the latest Android Wear update

Posted by Wayne Piekarski, Developer Advocate

A new update to Android Wear is rolling out with lots of new features like always-on apps, Wi-Fi connectivity, media browsing, emoji input, and more. Let’s discuss some of the great new capabilities that are available in this release.

Always-on apps

Above all, a watch should make it easy to tell the time. That's why most Android Wear watches have always-on displays, so you can see the time without having to shake your wrist or lift your arm to wake up the display. In this release, we're making it possible for apps to be always-on as well.

With always-on functionality, your app can display dynamic data on the device, even when the app is in ambient mode. This is useful if your app displays information that is continuously updated. For example, running apps like Endomondo, MapMyRun, and Runtastic use the always-on screen to let you keep track of how long and far you’ve been running. Zillow keeps you posted about the median price of homes nearby when you’re house-hunting.

Always-on functionality is also useful for apps that may not update data very frequently, but present information that’s useful for reference over a longer period of time. For example, Bring! lets you keep your shopping list right on your wrist, and Golfshot gives you accurate distances from tee to pin. If you’re at the airport and making your way to your gate, American Airlines, Delta, and KLM let you keep all of your flight info a glance away on your watch.

Note: the above apps will not offer always-on functionality on your watch until you receive the update to the latest version of Android Wear.

Always-on functionality works similarly to watch faces, in that the power usage of the display and processor is kept to a minimum by reducing the colors and refresh rate of the display. To implement an always-on Activity, you need to make a few small changes to your app’s AndroidManifest.xml, your app’s build.gradle, and the Activity to declare that it supports ambient mode. A code sample and documentation are available to show you how it works. Be sure to tune in to the livestream at Google I/O next week for Android Wear: Your app and the always-on screen.
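
To give a flavor of what this looks like, here is a minimal sketch of an ambient-aware Activity built on the wearable support library’s WearableActivity class. The layout, view ID and colors are placeholder choices for this example, not code from the official sample:

Java

import android.graphics.Color;
import android.os.Bundle;
import android.support.wearable.activity.WearableActivity;
import android.widget.TextView;

public class RunStatsActivity extends WearableActivity {

    private TextView mStatsView; // hypothetical view showing the current stats

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_run_stats); // hypothetical layout
        setAmbientEnabled(); // opt this Activity in to always-on (ambient) support
        mStatsView = (TextView) findViewById(R.id.stats); // hypothetical view ID
    }

    @Override
    public void onEnterAmbient(Bundle ambientDetails) {
        super.onEnterAmbient(ambientDetails);
        // Ambient mode: switch to a minimal, mostly black UI to keep power usage low.
        mStatsView.setTextColor(Color.WHITE);
        mStatsView.getPaint().setAntiAlias(false);
    }

    @Override
    public void onUpdateAmbient() {
        super.onUpdateAmbient();
        // Called periodically (roughly once a minute) while ambient; refresh the displayed data.
        refreshStats();
    }

    @Override
    public void onExitAmbient() {
        super.onExitAmbient();
        // Back to interactive mode: restore the full-color UI.
        mStatsView.setTextColor(Color.GREEN);
        mStatsView.getPaint().setAntiAlias(true);
        refreshStats();
    }

    private void refreshStats() { /* update mStatsView from your data source */ }
}

Calling setAmbientEnabled() in onCreate() is what opts the Activity in; the system then drives the transitions through the three ambient callbacks.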

Wi-Fi connectivity and cloud sync

Many existing Android Wear devices already contain hardware support for Wi-Fi, and this release enables software support for Wi-Fi. The saved Wi-Fi networks on your phone are copied to your watch during setup, and your watch automatically connects to those Wi-Fi networks when it loses Bluetooth connection to your phone. Your watch can then connect to your phone over the Internet, even if they’re not on the same Wi-Fi network.

You should continue to use the Data Layer API for all communications between the watch and phone. By using this standard API, your app will always work, no matter what kind of connectivity the user’s wearable supports. Cloud sync also introduces a new virtual node in the Data Layer called the cloud node, which may be returned in calls to getConnectedNodes(). Learn more in the Multi-wearable support section below.

Multi-wearable support

The release of Google Play services 7.3 now allows multiple wearable devices to be paired simultaneously to a single phone or tablet, so you can have one wearable for fitness and another for dressing up. While DataItems will continue to work in the same way, since they are synchronized to all devices, working with the MessageApi is a little different. When you update your build.gradle to use version 7.3 or higher, getConnectedNodes() from the NodeApi will usually return multiple nodes. There is also an extra virtual node representing the cloud node used to communicate over Wi-Fi, so all developers need to handle this situation in their code.

To help simplify finding the right node among many devices, we have added a CapabilityApi, allowing your nodes to announce features they provide, for example downloading images or music. You can also now use the ChannelApi to open up a connection to a specific device to transfer large resources such as images or audio streams, without having to send them to all devices like you would when embedding assets into data items. We have updated our Android Wear samples and documentation to show the best practices in implementing this.
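
As a rough sketch of the capability-based approach, the watch side might look up a node that advertises a capability and send it a message like this. The capability name and message path are made-up values for this example, and the blocking calls are assumed to run on a background thread with an already connected GoogleApiClient:

Java

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.wearable.CapabilityApi;
import com.google.android.gms.wearable.Node;
import com.google.android.gms.wearable.Wearable;

public class ImageRequester {

    private static final String IMAGE_CAPABILITY = "download_images";  // hypothetical capability name
    private static final String REQUEST_IMAGE_PATH = "/request-image"; // hypothetical message path

    // Blocking sketch; call from a background thread with a connected GoogleApiClient.
    public void requestImage(GoogleApiClient client, byte[] requestData) {
        CapabilityApi.GetCapabilityResult result = Wearable.CapabilityApi
                .getCapability(client, IMAGE_CAPABILITY, CapabilityApi.FILTER_REACHABLE)
                .await();

        // Pick a node that advertises the capability, preferring a nearby one.
        Node target = null;
        for (Node node : result.getCapability().getNodes()) {
            target = node;
            if (node.isNearby()) {
                break;
            }
        }

        if (target != null) {
            Wearable.MessageApi
                    .sendMessage(client, target.getId(), REQUEST_IMAGE_PATH, requestData)
                    .await();
        }
    }
}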

MediaBrowser support

The Android 5.0 release added the ability for apps to browse the media content of another app, via the android.media.browse API. With the latest Android Wear update, if your media playback app supports this API, then you will be able to browse to find the next song directly from your watch. This is the same browse capability used in Android Auto. You implement the API once, and it will work across a variety of platforms. To do so, you just need to allow Android Wear to browse your app in the package validator of your onGetRoot() method. You can also add custom actions to the MediaSession that will appear as controls on the watch. We have a Universal Media Player sample that shows you how to implement this functionality.
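
As a sketch of the idea (not the code from the sample), an onGetRoot() implementation might check the caller’s package name before exposing the media tree. The root ID is a placeholder, the Wear companion package name is an assumption, and a production app should also verify the caller’s signature the way the Universal Media Player sample does:

Java

import android.media.browse.MediaBrowser;
import android.os.Bundle;
import android.service.media.MediaBrowserService;

import java.util.ArrayList;
import java.util.List;

public class MusicService extends MediaBrowserService {

    private static final String MEDIA_ROOT_ID = "root"; // hypothetical root ID
    // Assumed package name of the Android Wear companion app.
    private static final String WEAR_PACKAGE = "com.google.android.wearable.app";

    @Override
    public BrowserRoot onGetRoot(String clientPackageName, int clientUid, Bundle rootHints) {
        if (WEAR_PACKAGE.equals(clientPackageName) || getPackageName().equals(clientPackageName)) {
            // Allow browsing from the watch (and from our own app).
            return new BrowserRoot(MEDIA_ROOT_ID, null);
        }
        return null; // Unknown callers are not allowed to browse.
    }

    @Override
    public void onLoadChildren(String parentId, Result<List<MediaBrowser.MediaItem>> result) {
        List<MediaBrowser.MediaItem> items = new ArrayList<>();
        if (MEDIA_ROOT_ID.equals(parentId)) {
            // Populate 'items' with your albums, playlists or tracks here.
        }
        result.sendResult(items);
    }
}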

Updates to existing devices

The latest version of Android Wear will roll out via an over-the-air (OTA) update to all Android Wear watches over the coming weeks. To take advantage of these new features, you will need to use targetSdkVersion 22 and add the necessary dependencies for always-on support. We have also expanded the collection of emulators available via the SDK Manager, to simulate the experience on all the currently available devices, resolutions, and shapes, including insets like the Moto 360.

In this update, we have also disabled support for apps that use the unofficial, activity-based approach for displaying watch faces, as announced in December. These watch faces will no longer work and should be updated to use the new watch face API.

Since its launch last summer, Android Wear has grown into a platform that gives users many possibilities to personalize their watches, with a variety of shapes and styles, a range of watch bands, and thousands of apps and watch faces. Features such as always-on apps and Wi-Fi allow developers even more flexibility to give users amazing experiences with Android Wear.

20 May 2015

Android Developer Story: Wooga’s fast iterations on Android and Google Play

Posted by Leticia Lago, Google Play team

In order to make the best possible games, Wooga works on roughly 40 concepts and prototypes per year, out of which 10 go into production, around seven soft launch, and only two make it to global launch. It’s what they call “the hit filter.” For their latest title, Agent Alice, they follow up with new episodes every week to maintain player interest and engagement over time.

The ability to quickly iterate on both live games and games in development is therefore key to Wooga’s business model. Android and Google Play provide the tools they need, which means new features and updates land on Android first, before they reach other platforms.

Find out more from Sebastian Kriese, Head of Partnerships, and Pal Tamas Feher, Head of Engineering, and learn how the iteration features of Android and Google Play have contributed to successes such as Diamond Dash, Jelly Splash, and Agent Alice.

You can find out more about building successful games businesses on Android and Google Play at Google I/O 2015: in person, on the live stream, or session recordings after the event. Check out the following:

  • Developers connecting the world through Google Play — Hear how the new mobile ecosystem, including Google Play and Android, is empowering developers to make good on the dream of connecting the world through technology to improve people's lives. This session will be live streamed.
  • Growing games with Google — In addition to consoles, PC, and browser gaming, as well as phone and tablet games, there are emerging fields including virtual reality and mobile games in the living room. This talk covers how Google is helping developers across this broad range of platforms. This session will be live streamed.
  • What’s new in the Google Play Developer Console — Google Play’s new launches will help you acquire more users and improve the quality of your app. Hear an overview of the latest features and how you can start taking advantage of them in the Developer Console.
  • Smarter approaches to app testing — Hear about the new ways Google can help maximize the success of your next app launch with cheaper and easier testing strategies.

06 May 2015

Exercise or Games? Why Not Both!

Posted by Alice Ching, Google Engineer

We are pleased to announce the release of Games in Motion, an open source game sample to demonstrate how developers can make fun games using Google Fit and Android Wear. Do you ever go on a jog and feel like there is a lack of incentive to help you run better? What if you were a secret agent and had to use your speed and your nifty gadget to complete missions?

With Games in Motion, you can enhance your exercise with missions and actions on your Android Wear device, while logging your jogs to the cloud.

Games in Motion is written in the Java programming language using Android Studio. It demonstrates multiple Android technologies:

  • Android Wear bridges notifications from a phone or tablet to a paired Android Wear device. The notifications are stacked so we can show multiple stats at the same time (a minimal sketch of stacked notifications follows this list).
  • Google Fit API collects and processes fitness data and sessions. This allows us to use the fitness data to show user progress. All exercise sessions done in Games in Motion will be recorded to Google Fit as well.
  • Google Play Games Services is used to create and unlock achievements.
  • Several different Android audio APIs are integrated.
  • JUnit tests are present for the data-driven parser, which demonstrates how unit testing can be done within Android Studio.
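
As a minimal illustration of the first point above, stacked notifications come from posting several notifications that share a group key. This sketch uses NotificationCompat from the support library with placeholder icons and text; it is not code taken from the Games in Motion source:

Java

import android.app.Notification;
import android.content.Context;
import android.support.v4.app.NotificationCompat;
import android.support.v4.app.NotificationManagerCompat;

public class StatsNotifier {

    private static final String STATS_GROUP = "games_in_motion_stats"; // hypothetical group key

    public void showStats(Context context) {
        NotificationManagerCompat manager = NotificationManagerCompat.from(context);

        Notification distance = new NotificationCompat.Builder(context)
                .setSmallIcon(R.drawable.ic_stat_run) // hypothetical drawable
                .setContentTitle("Distance")
                .setContentText("2.4 km")
                .setGroup(STATS_GROUP) // the shared group key stacks the cards on the watch
                .build();

        Notification pace = new NotificationCompat.Builder(context)
                .setSmallIcon(R.drawable.ic_stat_run)
                .setContentTitle("Pace")
                .setContentText("5:30 min/km")
                .setGroup(STATS_GROUP)
                .build();

        manager.notify(1, distance);
        manager.notify(2, pace);
    }
}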

You can download the latest open source release from GitHub. We hope to inspire similar Android games, where multiple different form factors are combined for a fun experience.

05 May 2015

Android Developer Story: The Hunt -- Increased engagement with material design and Google Play

Posted by Laura Della Torre, Google Play team

We've been in San Francisco talking with the team from The Hunt — a style and product sharing community. They've recently lifted the rate at which Android users start hunts to 20 percent after successfully implementing material design in the app, which is a 30 percent improvement over other platforms. As The Hunt’s Product Designer Jenny Davis puts it, “it felt like having a team of design experts on hand,” which lets them focus on what matters to the Android user.

But as we find out, that's not the whole story. Beta testing — managed from the Google Play Developer Console — also allowed them to iterate design and features daily. Based on feedback, they introduced the floating action button, which helped boost new hunts and helpful responses from the community. This speed and freedom is something the team thought possible only with their mobile website, until they started working with the Android tools.

Watch the video to discover more about how design and rapid iteration has been key to building a strong community.

Learn about using the tools that have helped improve user engagement for The Hunt:

  • Material design — learn more about material design and how it helps you create beautiful, engaging apps.
  • Beta testing — discover how to easily deliver test versions of your app to users for feedback before release.

29 April 2015

Integrate Play data into your workflow with data exports

Posted by Frederic Mayot, Google Play team

The Google Play Developer Console makes a wealth of data available to you so you have the insight needed to successfully publish, grow, and monetize your apps and games. We appreciate that some developers want to access and analyze their data beyond the visualization offered today in the Developer Console, which is why we’ve made financial information, crash data, and user reviews available for export. We're now also making all the statistics on your apps and games (installs, ratings, GCM usage, etc.) accessible via Google Cloud Storage.

New Reports section in the Google Play Developer Console

We’ve added a Reports tab to the Developer Console so that you can view and access all available data exports in one place.

A reliable way to access Google Play data

This is the easiest and most reliable way to download your Google Play Developer Console statistics. You can access all of your reports, including install statistics, reviews, crashes, and revenue.

Programmatic access to Google Play data

This new Google Cloud Storage access will open up a wealth of possibilities. For instance, you can now programmatically:

  • import install and revenue data into your in-house dashboard
  • run custom analysis
  • import crashes and ANRs into your bug tracker
  • import reviews into your CRM to monitor feedback and reply to your users

Your data is available in a Google Cloud Storage bucket, which is most easily accessed using gsutil. To get started, follow these three simple steps to access your reports:

  1. Install the gsutil tool.
    • Authenticate to your account using your Google Play Developer Console credentials.
  2. Find your reporting bucket ID on the new Reports section.
    • Your bucket ID begins with: pubsite_prod_rev (example: pubsite_prod_rev_1234567890)
  3. Use the gsutil ls command to list directories/reports and gsutil cp to copy the reports. Your reports are organized in directories by package name, as well as year and month of their creation.

Read more about exporting report data in the Google Play Developer Help Center.

Note about data ownership on Google Play and Cloud Platform: Your Google Play developer account is gaining access to a dedicated, read-only Google Cloud Storage bucket owned by Google Play. If you’re a Google Cloud Storage customer, the rest of your data is unaffected and not connected to your Google Play developer account. Google Cloud Storage customers can find out more about their data storage on the terms of service page.

28 April 2015

There's a lot to explore with Google Play services 7.3

Posted by Ian Lake, Developer Advocate

Today, we’re excited to give you new tools to build better apps with the rollout of Google Play services 7.3. With new Android Wear APIs, the addition of nutrition data to Google Fit, improvements to retrieving the user’s activity and location, and better support for optional APIs, there’s a lot to explore in this release.

Android Wear

Google Play services 7.3 extends the Android Wear network by enabling you to connect multiple Wear devices to a single mobile device.

While the DataApi will automatically sync DataItems across all nodes in the Wear network, the directed nature of the MessageApi is faced with new challenges. What node do you send a message to when the NodeApi starts showing multiple nodes from getConnectedNodes()? This is exactly the use case for the new CapabilityApi, which allows different nodes to advertise that they provide a specific functionality (say, the phone node being able to download images from the internet). This allows you to replace a generic NodeListener with a more specific CapabilityListener, getting only connection results and a list of nodes that have the specific functionality you need. We’ve updated the Sending and Receiving Messages training to explore this new functionality.
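
A sketch of that pattern might look like the following; the capability name is a made-up value and the GoogleApiClient is assumed to already be connected:

Java

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.wearable.CapabilityApi;
import com.google.android.gms.wearable.CapabilityInfo;
import com.google.android.gms.wearable.Node;
import com.google.android.gms.wearable.Wearable;

public class ImageNodeTracker implements CapabilityApi.CapabilityListener {

    private static final String IMAGE_CAPABILITY = "download_images"; // hypothetical capability name

    private final GoogleApiClient mClient;
    private volatile String mImageNodeId; // node currently offering the capability, or null

    public ImageNodeTracker(GoogleApiClient client) {
        mClient = client;
    }

    public void startTracking() {
        // Get callbacks only when the set of nodes offering this capability changes.
        Wearable.CapabilityApi.addCapabilityListener(mClient, this, IMAGE_CAPABILITY);
    }

    public void stopTracking() {
        Wearable.CapabilityApi.removeCapabilityListener(mClient, this, IMAGE_CAPABILITY);
    }

    @Override
    public void onCapabilityChanged(CapabilityInfo capabilityInfo) {
        mImageNodeId = null;
        for (Node node : capabilityInfo.getNodes()) {
            mImageNodeId = node.getId();
            if (node.isNearby()) {
                break; // prefer a nearby node when one is available
            }
        }
    }
}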

Another new addition for Android Wear is the ChannelApi, which provides a bidirectional data connection between two nodes. While assets are the best way to efficiently add binary data to the data layer for synchronization to all devices, this API focuses on sending larger binary data directly between specific nodes. This comes in two forms: sending full files via the sendFile() method (perfect for later offline access) or opening an OutputStream to stream real time binary data. We hope this offers a flexible, low level API to complement the DataApi and MessageApi.
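
As a rough sketch, sending a locally stored audio file to one specific node could look like this; the channel path is arbitrary and the blocking calls are assumed to run on a background thread with a connected GoogleApiClient:

Java

import android.net.Uri;

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.wearable.Channel;
import com.google.android.gms.wearable.ChannelApi;
import com.google.android.gms.wearable.Wearable;

public class AudioSender {

    private static final String AUDIO_CHANNEL_PATH = "/audio-file"; // arbitrary channel path

    // Blocking sketch; call from a background thread with a connected GoogleApiClient.
    public void sendAudioFile(GoogleApiClient client, String nodeId, Uri fileUri) {
        ChannelApi.OpenChannelResult result =
                Wearable.ChannelApi.openChannel(client, nodeId, AUDIO_CHANNEL_PATH).await();

        Channel channel = result.getChannel();
        if (channel != null) {
            // Streams the whole file to the other node; the receiving side picks it up
            // through a ChannelApi.ChannelListener and Channel.receiveFile().
            channel.sendFile(client, fileUri).await();
            channel.close(client);
        }
    }
}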

We’ve updated our samples with these changes in mind so go check them out here!

Google Fit

Google Fit makes building fitness apps easier with fitness-specific APIs for retrieving sensor data like current location and speed, collecting and storing activity data in Google Fit’s open platform, and automatically aggregating that data into a single view of the user’s fitness data.

To make it even easier to retrieve up-to-date information, Google Play Services 7.3 adds a new method to the HistoryApi: readDailyTotal(). This automatically aggregates data for a given DataType from midnight on the current day through now, giving you a single DataPoint. For TYPE_STEP_COUNT_DELTA, this method does not require any authentication, making it possible to retrieve the current number of steps for today from any application whether on mobile devices or on Android Wear - great for watch faces!
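
A sketch of fetching today’s step total might look like this; it assumes a connected GoogleApiClient and a background thread, since the blocking await() call must not run on the UI thread:

Java

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.fitness.Fitness;
import com.google.android.gms.fitness.data.DataPoint;
import com.google.android.gms.fitness.data.DataSet;
import com.google.android.gms.fitness.data.DataType;
import com.google.android.gms.fitness.data.Field;
import com.google.android.gms.fitness.result.DailyTotalResult;

public class StepCounter {

    // Blocking sketch; call from a background thread with a connected GoogleApiClient.
    public int getTodaysSteps(GoogleApiClient client) {
        DailyTotalResult result = Fitness.HistoryApi
                .readDailyTotal(client, DataType.TYPE_STEP_COUNT_DELTA)
                .await();

        DataSet totalSet = result.getTotal();
        if (totalSet == null || totalSet.isEmpty()) {
            return 0;
        }
        // readDailyTotal aggregates midnight-to-now into a single DataPoint.
        DataPoint total = totalSet.getDataPoints().get(0);
        return total.getValue(Field.FIELD_STEPS).asInt();
    }
}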

Google Fit is also augmenting its existing data types with granular nutrition information, including protein, fat, cholesterol, and more. By leveraging these details about the user’s diet, developers can help users stay more informed about their health and fitness.

Location

LocationRequest is the heart of the FusedLocationProviderApi, encapsulating the type and frequency of location information you’d like to receive. An important but small change to LocationRequest is the addition of a maximum wait time for location updates via setMaxWaitTime(). By using a value at least two times larger than the requested interval, the system can batch location updates together, reducing battery usage and, on some devices, actually improving location accuracy.
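
For illustration, a request that asks for updates roughly every ten seconds but lets the system batch delivery for up to a minute could be built like this; the interval values are examples only:

Java

import com.google.android.gms.location.LocationRequest;

public class BatchedRequestFactory {

    public LocationRequest createBatchedRequest() {
        // Illustrative values: a fix about every 10 seconds, with delivery batched
        // for up to a minute so the system can coalesce work and save battery.
        return LocationRequest.create()
                .setPriority(LocationRequest.PRIORITY_HIGH_ACCURACY)
                .setInterval(10000)        // desired interval, in milliseconds
                .setFastestInterval(5000)  // fastest rate the app can handle
                .setMaxWaitTime(60000);    // at least 2x the interval enables batching
    }
}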

For any ongoing location requests, it is important to know that you will continue to get good location data back. The SettingsApi is still incredibly useful for confirming that user settings are optimal before you put in a LocationRequest; however, it isn’t the best approach for continual monitoring. For that, you can use the new LocationCallback class in place of your existing LocationListener to receive LocationAvailability updates in addition to location updates, giving you a simple callback whenever settings might have changed which will affect the current set of LocationRequests. You can also use FusedLocationProviderApi’s getLocationAvailability() to retrieve the current state on demand.
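
A sketch of moving from a LocationListener to the new LocationCallback might look like this; the request values are illustrative and the GoogleApiClient is assumed to be connected:

Java

import android.os.Looper;

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.location.LocationAvailability;
import com.google.android.gms.location.LocationCallback;
import com.google.android.gms.location.LocationRequest;
import com.google.android.gms.location.LocationResult;
import com.google.android.gms.location.LocationServices;

public class LocationTracker {

    private final LocationCallback mCallback = new LocationCallback() {
        @Override
        public void onLocationResult(LocationResult result) {
            // Batched delivery: getLocations() may contain several points;
            // use result.getLastLocation() when only the newest fix matters.
        }

        @Override
        public void onLocationAvailability(LocationAvailability availability) {
            if (!availability.isLocationAvailable()) {
                // Conditions or settings changed; consider re-checking with the SettingsApi.
            }
        }
    };

    public void start(GoogleApiClient client) {
        LocationRequest request = LocationRequest.create()
                .setPriority(LocationRequest.PRIORITY_BALANCED_POWER_ACCURACY)
                .setInterval(10000); // illustrative interval

        LocationServices.FusedLocationApi.requestLocationUpdates(
                client, request, mCallback, Looper.getMainLooper());
    }

    public void stop(GoogleApiClient client) {
        LocationServices.FusedLocationApi.removeLocationUpdates(client, mCallback);
    }
}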

Connecting to Google Play services

One of the biggest benefits of GoogleApiClient is that it provides a single connection state, whether you are connecting to a single API or multiple APIs. However, this made it hard to work with APIs that might not be available on all devices, such as the Wearable API. This release makes it much easier to work with APIs that may not always be available with the addition of an addApiIfAvailable() method ensuring that unavailable APIs do not hold up the connection process. The current state for each API can then be retrieved via getConnectionResult(), giving you a way to check at runtime whether an API is available and connected.
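
A sketch of this pattern, treating the Wearable API as optional alongside a required Location API, might look like the following:

Java

import android.content.Context;

import com.google.android.gms.common.ConnectionResult;
import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.location.LocationServices;
import com.google.android.gms.wearable.Wearable;

public class ApiClientFactory {

    public GoogleApiClient build(Context context) {
        return new GoogleApiClient.Builder(context)
                .addApi(LocationServices.API)     // required API
                .addApiIfAvailable(Wearable.API)  // optional: will not block the connection
                .build();
    }

    public boolean isWearableConnected(GoogleApiClient client) {
        // After connect(), check whether the optional API actually came up.
        ConnectionResult result = client.getConnectionResult(Wearable.API);
        return result.isSuccess();
    }
}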

While GoogleApiClient’s connection process already takes care of checking for Google Play services availability, if you are not using GoogleApiClient, you’ll find many of the static utility methods in GooglePlayServicesUtil such as isGooglePlayServicesAvailable() have now been moved to the singleton GoogleApiAvailability class. We hope the move away from static methods helps you when writing tests, ensuring your application can properly handle any error cases.
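
If you are not using GoogleApiClient, the availability check might now be written roughly like this; the request code is an arbitrary value:

Java

import android.app.Activity;

import com.google.android.gms.common.ConnectionResult;
import com.google.android.gms.common.GoogleApiAvailability;

public class PlayServicesChecker {

    private static final int REQUEST_PLAY_SERVICES = 1001; // arbitrary request code

    public boolean checkPlayServices(Activity activity) {
        GoogleApiAvailability availability = GoogleApiAvailability.getInstance();
        int status = availability.isGooglePlayServicesAvailable(activity);
        if (status != ConnectionResult.SUCCESS) {
            if (availability.isUserResolvableError(status)) {
                // Show the standard dialog prompting the user to install, update or enable.
                availability.getErrorDialog(activity, status, REQUEST_PLAY_SERVICES).show();
            }
            return false;
        }
        return true;
    }
}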

The SDK is now available!

Google Play services 7.3 is now available: get started with the updated SDK now!

To learn more about Google Play services and the APIs available to you through it, visit the Google Play services section on the Android Developer site.