Due to an accident with my website, all the blog pictures have been deleted from the server. Over the next couple of weeks I will try to restore as many as possible, but since I tend to clean up local files rather strictly, many of them may be gone for good.
We have already discussed several topics related to OpenAL. This week we will investigate how we can structure the concepts of OpenAL to fit nicely with the object-oriented paradigm, so that we can write elegant C# code to play sounds and manage sound effects.
In this post we will quickly go over the three main concepts of OpenAL as discussed in my introduction post: listeners, sources, and buffers. In addition, we will also introduce a type for sound data, which comes in handy for sound effect management.
Last time we stepped through the process of loading and playing a sound using OpenAL. In the example we only played a single sound once. Practical applications, games in particular, will have a larger set of sounds they use, and they will play these sounds multiple times. They might even play the same sound multiple times simultaneously.
In this post we are going to look at managing a larger set of sound effects.
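The split between shared sound data and individual playbacks is the key idea here. The sketch below models it in plain Python (the actual posts use C# with OpenAL; all class names here are illustrative, not taken from the posts): one buffer holds the decoded sound data, every simultaneous playback claims its own source, and a manager pools a fixed number of sources, since OpenAL limits how many can exist.

```python
class Buffer:
    """Holds decoded sound data; can be shared by many sources."""
    def __init__(self, name):
        self.name = name

class Source:
    """A playback channel; plays at most one buffer at a time."""
    def __init__(self):
        self.buffer = None

    @property
    def is_free(self):
        return self.buffer is None

    def play(self, buffer):
        self.buffer = buffer

    def stop(self):
        self.buffer = None

class SoundManager:
    """Pools a fixed number of sources and hands them out on demand."""
    def __init__(self, source_count=16):
        self.sources = [Source() for _ in range(source_count)]

    def play(self, buffer):
        # Claim the first free source; drop the sound if none is available.
        for source in self.sources:
            if source.is_free:
                source.play(buffer)
                return source
        return None
```

Playing the same `Buffer` twice simply claims two different sources, which is exactly how one sound can be heard multiple times at once.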
In an earlier blog post I introduced the workings of OpenAL. Knowing the theory still doesn’t mean you know how to actually get a sound engine working. Something I found myself struggling with a lot when I started working with OpenAL for our game project was figuring out all the steps required to get a sound to play.
In this blog post I will be talking about the very basics of playing a sound. In the following blog post, I will approach the problem from the other end, and we will look at some of the high level management code that can be used to manage a large amount of sounds.
Last weekend was spent on creating a game in 48 hours for the Ludum Dare. The final results can be found here. In this post I will highlight one of the techniques I used during the game jam: random dungeon generation.
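A classic 48-hour-jam approach to random dungeons is the "drunkard's walk": start in the middle of a grid of walls and carve out floor tiles along a random walk until enough are open. This is an illustrative Python sketch of that general technique, not the generator used in the jam game:

```python
import random

def generate_dungeon(width, height, floor_count, seed=None):
    rng = random.Random(seed)
    grid = [["#"] * width for _ in range(height)]
    x, y = width // 2, height // 2
    grid[y][x] = "."
    carved = 1
    while carved < floor_count:
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        # Clamp one tile away from the border so the dungeon keeps its walls.
        x = min(max(x + dx, 1), width - 2)
        y = min(max(y + dy, 1), height - 2)
        if grid[y][x] == "#":
            grid[y][x] = "."
            carved += 1
    return grid
```

Because the walk moves one tile at a time, every carved floor tile is reachable from every other, which is why this technique is so popular under time pressure: connectivity comes for free.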
After two days of working on this game, Dungeon Hunter is now finished (or… as finished as it could be within 48 hours) and available for download. I will be posting my evaluation in a few weeks when the ratings are in as well. For now, enjoy the game and manual below!
As promised, here are my day one builds for the Ludum Dare!
It has been a while since I last posted a blog post. On Twitter I already announced that due to some personal circumstances and vacation I was unable to keep up with my blog. However, starting from next week I will go back to my schedule of publishing a new post every other week.
The upcoming weekend I will be participating in the latest instance of the Ludum Dare. I am still really happy with my result from last time and it will be really difficult to top that, but that isn’t stopping me from trying. I’ll be posting my progress on Twitter, try to post my day one builds here, and I may even livestream my process over on my Twitch channel, so keep an eye out for that. Below is a short overview of my plans for this Ludum Dare specifically, so read on if you’re interested.
Space, the final frontier and a recurring theme in games. The funny thing about space is that things rarely move in a straight line – then again, that all depends on your reference point. Gravity – and in particular orbital physics – forms an important aspect when it comes to movement in space. Many space games include some form of orbital mechanics, or even depend on them as a gameplay mechanic.
In this post I will talk about how orbits in games can be implemented. In particular, we will tackle some non-trivial orbits. This post will be very goal-focussed, and we will only minimally discuss the underlying mathematics, since it is primarily meant as an easy reference for implementing orbits yourself.
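To give a flavour of what "non-trivial" means here: a circular orbit is just a point moving around a circle, but an elliptical Kepler orbit requires solving Kepler's equation M = E − e·sin(E) for the eccentric anomaly E, which has no closed form. A standard approach is a few Newton iterations. The sketch below is plain Python for brevity (the blog's own code is C#), and is one common way to do this, not necessarily the post's:

```python
import math

def kepler_position(a, e, mean_anomaly):
    """Position on an elliptical orbit (focus at the origin) for a given
    mean anomaly: semi-major axis a, eccentricity e in [0, 1)."""
    # Newton-iterate Kepler's equation M = E - e*sin(E) for E.
    E = mean_anomaly
    for _ in range(20):
        E -= (E - e * math.sin(E) - mean_anomaly) / (1 - e * math.cos(E))
    # Convert the eccentric anomaly to in-plane coordinates.
    x = a * (math.cos(E) - e)
    y = a * math.sqrt(1 - e * e) * math.sin(E)
    return x, y
```

Advancing the mean anomaly linearly with time and calling this each frame traces out the orbit; with e = 0 it degenerates to the familiar circular case.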
Last week we introduced the concept of components as a framework for programming game objects. As opposed to last week, this week we will discuss a more technical subject: how to add communication between components. I will first describe a few use cases and why they interest us, before discussing several approaches with their advantages and disadvantages.
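One of the approaches in this space is message broadcasting: a component (or outside code) sends a named message to the game object, which delivers it to every sibling component that declares a matching handler, so no component needs a direct reference to another. A bare-bones Python sketch of that idea (illustrative names, not the post's C# code):

```python
class GameObject:
    def __init__(self):
        self.components = []

    def add(self, component):
        component.owner = self
        self.components.append(component)

    def send(self, message, *args):
        # Deliver to every component that declares a matching handler.
        for component in self.components:
            handler = getattr(component, "on_" + message, None)
            if handler is not None:
                handler(*args)

class Health:
    def __init__(self, hp):
        self.hp = hp

    def on_damage(self, amount):
        self.hp -= amount

class HitFlash:
    def __init__(self):
        self.flashes = 0

    def on_damage(self, amount):
        self.flashes += 1
```

The trade-off this makes explicit: senders are fully decoupled from receivers, at the cost of stringly-typed message names and no compile-time checking — exactly the kind of advantage/disadvantage weighing the post is about.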
In an average game there will be a lot of different types of game objects. Since these game objects often share behaviour, it makes sense to categorise them to avoid code duplication. The common solution — in object-oriented languages, that is — is to use inheritance. Inheritance has its limitations, however, and on closer inspection it does not appear to be the obvious choice at all. Switching to a component-based approach resolves a lot of issues. Developers using Unity will probably recognise the alternative I will discuss in this post, and I would be lying if I said I was not heavily inspired by it. My main focus however will be to explain why a component-based approach is used. By explaining the philosophy behind components I am hoping to give existing Unity (or other game engine) users a better picture of the intended use, and new users an interesting new take on game programming.
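The core idea, composition over inheritance, fits in a few lines. Instead of a class hierarchy where a `Fireball` must inherit from exactly one of `MovingObject` or `BurningObject`, behaviour is assembled from small components. A miniature Python sketch (names are illustrative; the approach, not this exact code, is what the post describes):

```python
class Moves:
    def __init__(self, speed):
        self.speed = speed

    def update(self, entity, dt):
        entity.x += self.speed * dt

class Burns:
    def update(self, entity, dt):
        entity.hp -= 1 * dt

class Entity:
    def __init__(self, *components):
        self.x, self.hp = 0.0, 100.0
        self.components = list(components)

    def update(self, dt):
        # Behaviour lives in the components; the entity just forwards.
        for component in self.components:
            component.update(self, dt)

# A "fireball" needs no Fireball class: it is just Moves + Burns.
fireball = Entity(Moves(speed=5.0), Burns())
```

New object types become new combinations of existing components rather than new nodes in a hierarchy, which is where the duplication problems of inheritance dissolve.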
Finite State Machines (FSM for short) are a very common phenomenon in game programming. Even if you have never heard of the term, it is unlikely that you have never used it before. While the concept behind Finite State Machines is simple, simple implementations can often lead to code that becomes hard to maintain. In this post we will focus on slowly abstracting from an ad hoc implementation to a simple framework we can apply in many different circumstances.
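To make the destination of that abstraction concrete: instead of a switch statement over an enum, each state becomes an object whose update decides which state runs next. A tiny Python sketch of such a framework (the states `Patrol`/`Attack` and the flag `enemy_visible` are invented for illustration):

```python
class State:
    def update(self, context):
        return self  # stay in this state by default

class Patrol(State):
    def update(self, context):
        if context["enemy_visible"]:
            return Attack()
        return self

class Attack(State):
    def update(self, context):
        if not context["enemy_visible"]:
            return Patrol()
        return self

class StateMachine:
    def __init__(self, initial):
        self.state = initial

    def update(self, context):
        # The current state decides which state runs on the next tick.
        self.state = self.state.update(context)
```

Adding a new state now means adding a class, not editing an ever-growing switch — which is what keeps the ad hoc version from becoming unmaintainable.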
The observant among my readers will probably have noticed that last week’s post is missing. Since I am planning to get my master’s degree very soon, I have been busy setting a few things in motion, and together with several deadlines, this kept me from writing a new post. To make up for it, I will give a short update on some personal projects, and next week I will be back with a full post.
Graphics and gameplay are two important pillars of building a game. Audio – both music and sound effects – is another important part of games. Many players and reviewers do not focus on audio much, and indeed, if the music and sound effects fit with what is happening on the screen, they blend naturally into the player’s experience. If you were to play a game without audio, though, you would immediately feel that something was wrong.
While a lot of game developers – especially programmers – are at least vaguely aware of the workings of an update loop or graphics code, audio is often something that enters the equation at a very late stage. In this post I will introduce the general concepts of basic audio programming using OpenAL.
I am currently participating in the Ludum Dare. The theme for this weekend is unconventional weapon. Below you can find my day one builds. Feel free to leave me feedback however you can.
- Web (unsupported in Chrome by default, enable NPAPI plugins as a workaround)
- WebGL (seems to be broken by the Unity compiler, feel free to try)
- Mac OSX
And if you don’t want to download playable versions, feel free to look at these gifs:
The goal of the game is to nudge satellites to hit enemies. Right now no new satellites or enemies are generated, so I guess you can only play around with the ones that are there (click the city with the mouse to fire a skyscraper). The biggest problem I am dealing with at the moment is making the gameplay fun: it is a bit fiddly to actually hit a satellite, and hitting an enemy seems nearly impossible.
I am considering changing the controls so the player can simply hold the mouse button on a satellite and then drag in the direction they want to push it. Having a skyscraper launch to make that happen might then look a bit odd, but it would make the gameplay a lot easier and more playable. If you have any ideas about how to make it work, please let me know.