Here are a few projects I have worked on. Click on a picture for more detailed information.
Videos: in-game music and heartbeat have been muted for demonstration purposes.
“Dead by Daylight”, an asymmetrical multiplayer horror game.
• Live production with regular content releases and reworks.
• Audio content: full ownership of killer characters since mid-2019, including the Demogorgon (weapons, powers, grunts, foley, UI/HUD...).
• Environments: full ownership of maps since the Stranger Things DLC (zoning, reverbs, 2D ambiences, 3D randoms, emitters, scripted events...).
• Implementation with Unreal Engine 4 and Audiokinetic Wwise through Blueprints, animation sequences, etc.
• Achieved major optimizations for mobile platforms and established optimization best practices.
• Major refactoring: naming conventions, Work Unit and SoundBank refactoring, memory pool size assignment, bus structure debugging.
• Tech/system upgrade: reworked the reverberation of all maps.
• Live project maintenance: regular engine upgrades, continuous debugging, optimization, and refactoring.
• Profiling/optimizing for all platforms: PC, consoles, Switch, Android/iOS.
• Audio post-production pipeline setup for videos: conforming, mixing best practices, standard delivery, archiving.
• Editing and mixing of marketing and in-game videos.
“Ellen’s Road to Riches Slots” and casino games. Client: Double Down Interactive.
• Content creation for a live game: UI and 2D animations, slot SFX, music editing.
• Implementation in Unity and Wwise.
• Optimization and porting to the Facebook web version using the CRIWARE ADX2 audio engine.
“Assassin’s Creed Rebellion” mobile game.
Client: Ubisoft.
• Content creation: UI, character animation.
• Implementation on Unity with proprietary audio tool.
• Supported audio tool development: debugging and feature requests before shipping and project hand-off.
“Westworld” branded management mobile game.
Client: HBO/Warner Bros.
• Content creation: UI, character animation, ambiences, music beds.
• Implementation on Unity with proprietary audio tool.
• Additional focus on the sound mix, plus organizational and optimization refactoring before final shipping.
This is a re-edited version of the Star Citizen: Squadron 42 trailer.
After heavily re-editing the video for pacing and narrative, I chose to make a sound-FX-only version to showcase my work as a sound editor/designer.
Will you be able to spot the processed seagull screams?
Or the Wilhelm scream? (Well, that one is pretty obvious.)
After that, I exported most of the sound effects and implemented them in Wwise in order to create the demos you can discover in the next videos.
Also, check out how different it is from the original!
Link to the original video:
vimeo.com/51129845
In this demo, I showcase the interactive starship engine model I built in Wwise, featuring cockpit components that shake under g-force and stress, proportionally to the ship's health.
It also features audible feedback from the ship during extreme turns.
- The engine model is built from three loops placed in 3D. Each has its pitch/volume/LPF/HPF modulated by the engine-thrust RTPC.
- The dynamic shaking sounds use a crossfading model controlled via RTPC: cockpit components shake more and more as engine thrust increases. The effect is also proportional to the ship's condition; the worse it gets, the more you'll hear it!
This would be even more immersive if each sound corresponded to a physical object inside the cockpit, with its own condition!
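The RTPC mappings above can be sketched in code. This is a minimal illustration of the idea, not the actual Wwise project: the curve shapes, loop names, and scaling constants are all assumptions (in Wwise itself, these would be RTPC curves authored on each loop's pitch/volume/filter properties).

```python
def lerp(a, b, t):
    """Linear interpolation between a and b for t in 0..1."""
    return a + (b - a) * t

def engine_loop_params(thrust):
    """Map the engine-thrust RTPC (0..1) onto volume and pitch (semitones)
    for three stacked 3D loops. Loop names and ranges are hypothetical."""
    return {
        "low_rumble": {"volume": lerp(1.0, 0.4, thrust), "pitch_st": lerp(0.0, 2.0, thrust)},
        "mid_whine":  {"volume": lerp(0.3, 1.0, thrust), "pitch_st": lerp(-2.0, 4.0, thrust)},
        "high_hiss":  {"volume": lerp(0.0, 0.8, thrust), "pitch_st": lerp(0.0, 7.0, thrust)},
    }

def shake_intensity(thrust, health):
    """Cockpit shake grows with thrust and with damage: the lower the
    ship's health (0..1), the more rattling you hear."""
    damage = 1.0 - health
    return min(1.0, thrust * (0.5 + 0.5 * damage) * 1.5)
```

At full thrust a damaged ship saturates the shake layer, while an undamaged ship at idle produces none, which matches the behavior described above.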
In this video you'll see a Wwise project playing a spaceship simulation demo.
The project uses HDR audio, traditional ducking, and convolution cockpit reverb; in addition, the Wwise output loudness is very close to -23 LUFS.
- Important radio communications duck the mix. I chose ducking over HDR because these dialogue lines are critical and must remain fully intelligible. One can also imagine they could be VoIP in multiplayer.
- The spaceship AI (a synthesized female voice) is HDR-processed and ducks the ship engine sounds and weapon SFX.
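The ducking hierarchy described above can be sketched as a simple priority table. This is only an illustration of the routing logic; the attenuation values and bus names are assumptions, not the project's actual settings (in Wwise, this is authored as auto-ducking on the relevant buses).

```python
def bus_attenuation_db(radio_active, ai_voice_active):
    """Return per-bus attenuation in dB. Radio comms duck the whole mix;
    the ship AI only ducks engines and weapons. Values are illustrative."""
    duck = {"music": 0.0, "engine": 0.0, "weapons": 0.0, "radio": 0.0, "ai": 0.0}
    if ai_voice_active:
        duck["engine"] -= 6.0   # AI voice pushes engines/weapons down
        duck["weapons"] -= 6.0
    if radio_active:
        for bus in ("music", "engine", "weapons", "ai"):
            duck[bus] -= 12.0   # radio ducks everything else, stays untouched itself
    return duck
```

Note that the attenuations stack: when both the radio and the AI are speaking, the engine bus is ducked by both sources, which keeps the radio on top of the mix.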
Here is another video showcasing my work as a sound designer and integrator. I recorded most of the sounds used here, then integrated the assets into Wwise in a creative, dynamic way.
For example, when a clay jar is destroyed, the audio engine (Wwise) plays three layers of sound:
- One "attack" and more percussive layer will play ahead.
- Then a second layer of clay being broken will play.
- Finally a sample of clay falling debris will play.
Each sample is picked at random from a pool of many, with gentle random pitch and slight level variation.
The trigger timing of these samples also varies slightly every time the event fires; as a result, you can break a thousand jars and it will never sound the same!
The same technique is used for the attack sounds, footsteps, door breaking, etc.
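The layered randomization above can be sketched as follows. This is a minimal model of the idea, assuming made-up sample names and variation ranges; in Wwise itself this is built with random containers plus per-voice pitch/volume randomizers and trigger delays.

```python
import random

def build_jar_break(rng=random):
    """Return one jar-break event as a list of
    (sample, delay_s, pitch_cents, gain_db) tuples: attack, body, debris.
    Sample pools and ranges are hypothetical."""
    layers = [
        ("attack", ["attack_01.wav", "attack_02.wav", "attack_03.wav"], 0.00),
        ("body",   ["clay_break_01.wav", "clay_break_02.wav"],          0.03),
        ("debris", ["debris_01.wav", "debris_02.wav", "debris_03.wav"], 0.12),
    ]
    event = []
    for _name, pool, base_delay in layers:
        sample = rng.choice(pool)                 # random pick from the pool
        delay = base_delay + rng.uniform(0.0, 0.02)  # slight timing jitter
        pitch = rng.uniform(-100.0, 100.0)        # +/- 1 semitone, in cents
        gain = rng.uniform(-2.0, 0.0)             # slight level variation
        event.append((sample, delay, pitch, gain))
    return event
```

Because every field is drawn independently per trigger, no two jar breaks come out identical, which is exactly the effect described above.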
I also encourage you to download the project and check out the "shield" system (shield up, down, blocking, shield throw), which uses a synthesized loop to illustrate the magic shield in action. When the shield is thrown, a tremolo effect is synchronized with the player-to-shield distance RTPC, creating a nice, lively effect on top of the pitch and volume automation.
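The distance-driven tremolo can be sketched like this. It is only an illustration: the rate, maximum distance, and the choice to deepen the tremolo with distance are assumptions (the original only says the effect is synchronized with the player-to-shield distance RTPC).

```python
import math

def tremolo_gain(t, distance, max_distance=30.0, rate_hz=8.0):
    """Amplitude multiplier at time t (seconds) for a tremolo whose depth
    follows the player-to-shield distance RTPC: deeper wobble when the
    shield is further away. Constants are hypothetical."""
    depth = max(0.0, min(1.0, distance / max_distance))   # clamp RTPC to 0..1
    lfo = 0.5 * (1.0 + math.sin(2.0 * math.pi * rate_hz * t))  # 0..1 sine LFO
    return 1.0 - depth * (1.0 - lfo)  # full gain at zero distance
```

At zero distance the gain stays at 1.0 (no tremolo); at maximum distance the LFO sweeps the gain over its full range.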
Way better than the original, IMO! :-)
Please download the Wwise project and check it out:
drive.google.com/open?id=0B5Npu-9qz0nvfkN3RERmaF9PQzR0cUxrcUFhMVVoNkNlUWlNRldDZUVzOC02aXpZQU1ZbFk
Best.