Capcom Portfolio
'Chittering Mountain' (inspired by Kunitsugami)
Tools used: Cubase, Arturia Pigments, AAS String Studio/Chromaphone, BFD3 (Kabuki Percussion), Pianoteq, Kontakt
(nb. YouTube compression has not been very kind to this one...)
I really enjoyed Kunitsugami, and felt that its soundtrack was a triumph of atmosphere and tone painting. I tried to capture some of that feel in this piece: the piano leads, surrounded by the characteristic breathy 'noise' textures and traditional Japanese instruments (Sho, Shakuhachi synth). Harmonically I took some influence from Scriabin, a personal favorite composer who used quartal harmonies much like those in the Kunitsugami soundtrack, letting slightly 'off' voicings build an uncanny atmosphere.
I also used mix techniques such as an autopanned, bitcrushed delay to create a sense of space and to suggest a lurking presence on the perimeter - enemies somewhere in the distance, outside the 'safe' area implied by the warm piano tone.
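As an aside, a minimal Python sketch of that kind of 'lurking' send effect - bitcrush, then a feedback delay, then a slow autopan - is shown below. The filenames and parameters are hypothetical stand-ins; the actual effect was built from plugins in Cubase, not code.

```python
import numpy as np
from scipy.io import wavfile

def bitcrush(x, bits=6):
    # Quantize to a coarse bit depth for a gritty, degraded tone.
    levels = 2 ** bits
    return np.round(x * levels) / levels

def feedback_delay(x, sr, time_s=0.45, feedback=0.5, repeats=4):
    # Sum progressively quieter, time-shifted copies of the signal.
    d = int(time_s * sr)
    out = np.zeros(len(x) + d * repeats)
    for i in range(repeats + 1):
        out[i * d : i * d + len(x)] += x * (feedback ** i)
    return out

def autopan(x, sr, rate_hz=0.3):
    # Sweep the mono signal between left and right with a slow sine LFO.
    t = np.arange(len(x)) / sr
    pan = 0.5 + 0.5 * np.sin(2 * np.pi * rate_hz * t)  # 0 = hard left, 1 = hard right
    return np.stack([x * np.sqrt(1 - pan), x * np.sqrt(pan)], axis=1)

# 'piano_send.wav' is a placeholder for a mono 16-bit stem routed to the effect.
sr, dry = wavfile.read("piano_send.wav")
dry = dry.astype(np.float64) / np.iinfo(np.int16).max

wet = autopan(feedback_delay(bitcrush(dry), sr), sr)
wet /= max(1.0, np.abs(wet).max())  # avoid clipping after the summed delay taps
wavfile.write("lurking_delay.wav", sr, (wet * np.iinfo(np.int16).max).astype(np.int16))
```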
Given more time, I would like to make this longer and introduce a more percussive section - taiko would suit a battle scene well - so I hope to add another section in the future.
'Coisa no. 0'
Tools used: Dorico, VSL Strings & Brass, Berlin Free Orchestra, East-West Hollywood Orchestra, VSTs (post-processing)
This piece was written while testing out the functionality of Dorico 5, a score-writing tool released by Steinberg. I was curious how much of the usual VST arrangement process could be shortened with a tool like this: normally in a DAW you change instrument techniques with keyswitches, but in Dorico, writing a playing technique into the score makes the VST switch to it automatically. For solo writing I still think keyswitches are best, but for orchestral writing Dorico was genuinely faster, and its 'realistic' playback produced good results in combination with the orchestral libraries' built-in mix positions. Being able to export audio directly from Dorico was also a help when making demos.
Musically, this piece was inspired by one of my favorite albums of all time - 'Coisas' by the legendary Brazilian jazz composer Moacir Santos - with a little mid-century orchestral music thrown in. I still consider it a work in progress, as the middle section needs some expansion, but I'm quite satisfied with the rhythmic and harmonic feel achieved in most of the piece.
'Screaming!'
Tools used: Pro Tools, SPEAR (spectral resynthesis), Soundhack, Zoom H4N
This piece was composed for the final assignment of an 'Electroacoustic Music' class I took at university, which introduced the computer-aided sound design and composition techniques of European composers such as Pierre Schaeffer, Bernard Parmegiani, and Jonathan Harvey, who worked with tape and computers at studios like IRCAM to morph recorded sounds into new ones. We were required to gather sounds with field recorders and to create our pieces entirely by processing those recordings.
Unfortunately I found this quite hard: I would show each of my field recordings to my teacher, who would routinely dismiss them as 'routine' or 'cliché'. Feeling frustrated, I jokingly asked a classmate to scream into my Zoom H4N field recorder after class. It was good for stress relief, but I soon realized that the sound had interesting tonal qualities, a characteristic envelope, and connotations that made it compelling source material.
All of the sounds in the piece were created by processing those two scream recordings, each about a second long. I spent about ten hours making different versions of the sound with spectral processing tools: time-stretching to lengthen and shorten it, changing the pitch envelope, changing the formants, sometimes cutting individual partials out of the spectrogram in SPEAR, or shifting entire portions of the sound through the pitch domain. I also used Soundhack to convolve the edited sounds against each other, which revealed interesting sub-rhythms created by the pitch patterns in the original material.
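The convolution step can be sketched in a few lines of Python; the filenames below are hypothetical stand-ins for two of the spectrally edited scream variants (mono, 16-bit), and Soundhack's own processing naturally differs in its details.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Hypothetical filenames standing in for two spectrally edited scream variants.
sr, a = wavfile.read("scream_stretched.wav")
_, b = wavfile.read("scream_pitchbent.wav")
a = a.astype(np.float64) / np.iinfo(np.int16).max
b = b.astype(np.float64) / np.iinfo(np.int16).max

# Convolving one sound with another smears each through the other's spectrum and
# amplitude contour; coinciding bumps in the two envelopes surface as sub-rhythms.
c = fftconvolve(a, b)
c /= np.abs(c).max()  # convolution output can get extremely loud, so normalize

wavfile.write("scream_convolved.wav", sr, (c * 0.9 * np.iinfo(np.int16).max).astype(np.int16))
```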
Finally, I arranged the sounds in Pro Tools as though tape editing, following a structure I had sketched of continually building tension and shocking the listener, and varying the amplitude envelope between continuous and abrupt sounds.
In the end, the piece was played over stereo speakers during a small concert at the Sydney Opera House. It was the only electroacoustic piece on the program, and with no performers on stage the audience had to engage fully with the audio. Reactions varied: some people were scared, others were thrilled, some were amused, and one person said the bass frequencies made their hearing aid malfunction. It is sound art more than music, but I think the structure makes it enjoyable even for a general audience.
I learned a great deal from making this piece, and I still receive compliments on it. It taught me that even a single sound can contain enough material for an entire piece, and how to take a sound with an inherent 'meaning' and transform it enough to give it a new one. It also taught me how to structure abstract sound into musical events, and to arrange those events in entertaining ways without relying on traditional harmonic or melodic content.
'Freakmode' - Orangeblood
Tools used: Pro Tools, VSTs (FM8, Kontakt, BFD drums, Geist sampler, Korg Collection), post-processing (saturation plugins etc.)
Written as a trial track for the hip hop/G-Funk-inspired hack-and-slash JRPG 'Orangeblood'. The developer liked the track a great deal, and I was given the privilege of composing the entire soundtrack.
The direction I received for this project was "write 90s-style rap songs without the vocals, so that players will want to rap along with the track themselves". That meant that the results were going to be a little repetitive by nature, so I tried to make sure each new verse and chorus had a new element in it to keep things interesting. By the end of this particular song I think there are at least 6 riffs going at once. I really tried to follow the 'funk' idea of "every instrument is actually a drum".
What I really enjoyed about this project was embracing 'cheesy' sounds and sometimes 'off' notes - in the right context they were more 'right' than 'wrong'. I also enjoyed programming my own breakbeats and then resampling them. Though it sounds looped, every note in the soundtrack was played by hand...
'Block World' - Yume Nikki -Dream Diary-
Tools used: Pro Tools, VSTs (FM8, Kontakt, Delays), FFT analysis
In 2018 I was hired to work as a Sound Creator at Active Gaming Media, where my first responsibility was to create sound assets and music for the free post-release DLC of 'Yume Nikki -Dream Diary-'. As a big fan of the original game this was very exciting but also terrifying, as its unique music and sound design are well-loved by its fans. I wanted to do it justice as much as possible.
However, recreating the original Block World music proved a challenge. It was essentially an endless two-second loop of four dissonant percussion notes in a row. In the original freeware game this loop was infuriating at first, although you eventually tuned it out. For a product users would pay money for, I was told to do basically anything other than the original music!
Eventually I decided to try a 'spectralist' approach. The original percussion sound was harmonically complex, and I thought that by analysing its partials I could derive a structure and 'zoom in' to the sound, essentially turning a percussion instrument into an evolving pad. I ran an FFT analysis of the original sound, created a new sound in FM8 using the exact inharmonic partial frequencies of the original, and tweaked the envelope of each partial so that it faded in gradually and separately. The resulting sound 'felt' like the original, but you would never know unless you sped it up by about 800%.
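A rough Python sketch of that analysis-and-resynthesis idea follows: pick out the loudest partials of the original hit, then rebuild them as slowly fading sine waves. The filename is hypothetical and the real sound was programmed by hand in FM8, but the principle is the same.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import find_peaks

# Hypothetical filename for the original percussion hit (mono, 16-bit).
sr, hit = wavfile.read("blockworld_hit.wav")
hit = hit.astype(np.float64) / np.iinfo(np.int16).max

# 1. Find the strongest (inharmonic) partials of the original sound.
spectrum = np.abs(np.fft.rfft(hit * np.hanning(len(hit))))
freqs = np.fft.rfftfreq(len(hit), 1 / sr)
peaks, props = find_peaks(spectrum, height=spectrum.max() * 0.05)
loudest = peaks[np.argsort(props["peak_heights"])[-8:]]  # keep the 8 loudest partials

# 2. Resynthesize them as a slowly evolving pad: each partial fades in separately.
dur = 8.0
t = np.linspace(0, dur, int(sr * dur), endpoint=False)
pad = np.zeros_like(t)
for i, p in enumerate(sorted(loudest)):
    fade_in = np.clip(t / (1.0 + 0.8 * i), 0.0, 1.0)  # later partials emerge later
    pad += fade_in * spectrum[p] * np.sin(2 * np.pi * freqs[p] * t)

pad /= np.abs(pad).max()
wavfile.write("blockworld_pad.wav", sr, (pad * 0.9 * np.iinfo(np.int16).max).astype(np.int16))
```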
Now that I had a workable sound, I wrote a very brief melody derived from the secundal motion of the original, then worked on creating a 'space' using delays, feedback and sound positioning. The space needed to feel oppressive, filling the gaps between notes with textural detail. A subtle layer of tape compression in post-processing helped complete the feeling.
Players were able to navigate the new platforming challenges in this space without being irritated by the original loop, and many of them expressed that the music 'felt' similar, but couldn't quite articulate why or how. I was quite pleased and felt it was appropriate for our game, which was an 'alternate take' on the original.