Chelpa Ferro is well known for squeezing rhythmic sound from seemingly non-musical devices such as electric toothbrushes, drills, sewing machines, and juicers, and using them in their installations and performances. At The Aldrich, the Acusma installation will fill the gallery with a sound resembling a group of people coming together to sing. However, the sound does not visually match its source, which turns out to be a series of beautiful Brazilian ceramic vases spread out on the gallery floor, with loudspeakers inside each vessel playing up to five different recorded voices.
Curator Mónica Ramírez-Montagut says, “In Chelpa Ferro’s work, the blend of high-tech equipment (speakers, cables, computers, and sophisticated computer programming) is integrated with traditional Brazilian crafts and domestic objects, providing a new and surprising visual representation of sound and conferring an aura of mystery upon these mundane objects.”
The post Acusma | Chelpa Ferro appeared first on generactive :: generative + interactive + art + design.
This installation by Sam Van Aken, realized in 2006, was built by stacking and attaching home stereo speakers into a wall that played a random sampling of the quote “oh my God,” spanning an emotional spectrum from horror to sarcasm to sheer ecstasy. Using a computer program designed by the artist and a seemingly endless run of speaker wire and extension cord, the quote was looped through individual speakers, added one at a time, until the entire wall screamed “oh my God” in a variety of tones and contexts at the same time. Remarkably, the range of human emotion the artist captured with this single quote was reflected in the viewers’ laughter, disgust, tears, and more.
Sam Van Aken is represented in New York by Ronald Feldman Fine Arts and Michael Klein Arts.
Sam Van Aken’s site
The post oh my god | Sam Van Aken appeared first on generactive :: generative + interactive + art + design.
How do we search for alien life if it’s nothing like the life that we know? At TEDxUIUC Christoph Adami shows how he uses his research into artificial life — self-replicating computer programs — to find a signature, a ‘biomarker,’ that is free of our preconceptions of what life is.
The post Christoph Adami | Finding life we can’t imagine appeared first on generactive :: generative + interactive + art + design.
Matthew Davidson aka Stretta is a talented guy. He’s an accomplished graphic artist and video producer/editor but we talked to him about his music. Stretta’s music is lush, modest and dreamy in the tradition of Brian Eno but it definitely has character of its own. Stretta comes from a tradition of modular synthesis that led him to discover Max/MSP.
The following is reposted from an interview by Marsha Vdovin here.
Tell me a little about your background.
I grew up in Iowa, and in 1988 I was trying to figure out what college to go to. I applied to one school, and found myself out on the East Coast, at Berklee College of Music. While I was at Berklee, one of my professors introduced me to Max. That was 1992. I’ve been using Max ever since.
I was interested in music and technology, so growing up in Iowa — pre-internet—all the information I had access to was books and magazines. It’s not at all like the hot and cold running information that we have on tap these days, where you can be anywhere in the world and learn about any subject very quickly.
As an example, I recently developed an interest in photography. This is a subject I knew nothing about. With the internet, and the instant turnaround of digital photography, being able to see other people’s work and inspect the metadata, I was able to learn a great deal in a short amount of time. Today, taking up any new interest like electronic music is far easier than when I started. I can tell you that much.
Digital photography opened up so much for me. I was able to do it without spending money, which was incredible. People were able to see my work all over the world, without me spending money.
Right. The spending money thing is analogous to what life was like before digital recording. A reel of tape costs money, so when you’re rolling, when you’ve hit the record button, there’s money at stake. That was the same thing with photography, you’re burning film. Now it doesn’t cost anything to drop the shutter, and now it doesn’t cost anything to play with digital audio. This accelerates the learning process.
I loved your Way-Geeky Time Line.
[Laughs.] You’ve done your research. Looking back, I realized that computers helped me express myself, so the correlation between operating systems or computers and what was happening in my life was significant. The first computer I had access to at home was an Apple Lisa. The first time I used it, it was like touching the future. It was like someone got in a time machine, kidnapped a computer, then brought it back to the current day. I’d never experienced anything like that before.
I guess you could apply the oft-used term “paradigm shift.” I hate to use that word, but I can’t really think of anything better to describe what it was like going from computers with a green phosphor screen to a black-and-white bitmap display where you click on objects and open them up. It’s not hyperbole to say that that changed my life.
I only had that machine for three months, then it was replaced by a 128K Macintosh. I was definitely one of the very early Mac users, and I’ve been fortunate in my choice of careers and work, as I’ve never had to use a Windows machine. Even in the dark days of the ’90s.
Did you take to Max right away?
I remember the night I was exposed to Max. Afterwards, I stood outside Berklee and put my head back and looked up at the sky, imagining how far this thing went. I recognized it and I knew it was one of those things I could spend years playing with, and never really see the end of the potential.
I’m very fortunate to watch Max evolve, sprouting audio, making all these technological leaps, and then the leap to OS X. It continues to be more capable, while retaining its essential core.
My favorite toy growing up was Legos, and I see a commonality. People I talk to who are into modular synthesizers, or into Max, there’s this commonality of “Did you play with Legos when you grew up?” “Yeah.” So it’s like that. It’s like Legos for music.
I like that granularity of control. It sits in this weird space, between commercial music applications and programming languages. Max is somewhere in between these two things. It allows you to create and customize your environment without programming and compiling.
I’m not a programmer—there is something about procedural languages, text-based, linear thinking that I don’t get along with. Max is non-linear, it moves in all directions, it’s real time. If you’re a guitarist, you understand how guitar pedals and patch cords work. You plug this into this and this other thing. I think this is a metaphor that is compatible with musicians.
If you understand these things, then understanding Max comes intuitively. When your creations evolve, and they tend to get more complex, you look back at it and you think, “How did I even understand this to begin with?” Because it looks really complicated. But then you break it down into smaller parts, and you can see how everything works.
So, is Max your primary music-making tool?
No. I would be surprised to hear anyone say that it is, simply because we live in this age where we have so many amazing tools available to musicians. There has never been a better time, from a technological standpoint, to be a musician. So while there are people who can dedicate themselves monk-like to a particular tool—Charles Cohen comes to mind. He’s been using a Buchla Music Easel for forty years. That’s his thing, and he knows it inside out. I admire that. We need people like that to be able to dedicate themselves to an instrument, but I don’t have that kind of dedication.
Have you gone the Jitter road? Have you combined your photography with Max?
[Laughs.] No. Like I said, Max is one of those things that you could spend the rest of your life dedicating yourself to the possibilities, and not exhaust them all. Based on my interest in video and photography, I am definitely interested in Jitter, but I haven’t come close to exhausting all the ideas I have for audio and MIDI within Max yet.
I think if someone came to me and said, “You know, we want you to do a live performance, and we want there to be video,” yeah, [laughs] I would fast-track my Jitter education.
Also, I don’t think there are enough video-oriented applications for the Monome. The Monome is very audio-centric right now and there isn’t any good reason for that. The Monome, in conjunction with Jitter, would be very powerful.
What is it about the Monome that draws you to it?
Probably it was all the years of Max prior to it. You spend all this time with Max, and then you think to yourself, “Gosh, I really wish I had a controller to go along with this, to provide input and feedback.” People would come out with controllers, and they would be overly specific, or they wouldn’t do the thing that you wanted to do.
Then you started seeing people building their own controllers. Do-it-yourself kits became available, like the iCube, where you could hook up sensors and other analog sources and it would provide a MIDI output. That was a good move forward.
But when I saw the Monome, I just thought, “Oh, of course. I know exactly what I would do with that.” I think that’s partly why Max has been the default language of choice for Monome developers. They’re very well suited for each other. There are no labels of any sort, there’s no pre-determined, prescribed usage to the Monome. It is exactly what you’re looking for if you’d done anything in Max at all in the past.
How did the Max 5 change affect you?
I was using Max 4 up until about two or three months ago. I knew about Max 5, I knew what was going on with the environment, and I thought it was a very necessary, gutsy move for the company. From what I could tell, it took at least two solid years of engineering, adding no new features or capabilities to the software, just redoing the user interface from the ground up with a completely new framework. That’s the right way to do things.
If they were a larger company, they would find a way to screw it up. “You want to do what? For how long? That’s ridiculous.” But the change from Max 4 to Max 5 is as significant as the change from OS 9 to OS X.
I was talking to Nick Rothwell as recently as September, telling him that I think it’s time for me to move into Max 5. He said, “Well, once you start using Max 5, you’re never going to go back.” Intellectually I believed him, but deep down, I was like, “Yeah, well we’ll just see about that”— because it is a big change. And oh, he was right. [laughs] I have a Max 5 license on one computer and a Max 4 license on another computer. I can’t bring myself to use Max 4 anymore.
I took to it in a fairly short amount of time. I think the main change, in terms of capabilities for Max 5, is being able to think in metric units. You can think in terms of 16th notes and 8th notes, and you don’t have to worry about milliseconds, or converting this to samples. That makes everything a lot easier. The idea of a global transport, and having access to metrical units is a really big deal for me. That was huge.
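The conversion work that Max 5's transport removes is easy to see outside of Max. Here is a small Python illustration (my own example, not Max code, with hypothetical function names) of the millisecond and sample arithmetic that metrical time values hide:

```python
# Illustrative sketch: the arithmetic you'd otherwise do by hand to turn
# "a 16th note at this tempo" into milliseconds or samples.

def note_ms(bpm: float, division: int) -> float:
    """Duration in milliseconds of one 1/division note at a given tempo.

    A quarter note lasts 60000/bpm ms; a 16th note is a quarter of that.
    """
    quarter_ms = 60000.0 / bpm
    return quarter_ms * 4.0 / division

def note_samples(bpm: float, division: int, sample_rate: int = 44100) -> int:
    """The same duration expressed in audio samples."""
    return round(note_ms(bpm, division) / 1000.0 * sample_rate)

# At 120 BPM, a 16th note is 125 ms, roughly 5512 samples at 44.1 kHz:
print(note_ms(120, 16))
print(note_samples(120, 16))
```

With a global transport, a metro can simply be told “16n” and this bookkeeping disappears.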
What’s your favorite object?
The Coll object.
And why is that?
I use it in every single patch. It’s familiar, like an old friend. I know it, and I know how to use it. I’m constantly learning new things about it. I think if you’re doing anything that manipulates or stores little bits of data, you have to get comfortable with the Coll object.
It seems to be pretty fast. I don’t have any problem extracting data in a timely fashion from it. If you have a Coll object and a Metro, you have the entire basis of a whole variety of step sequencers with a timed beat. You can do all sorts of magic with just those two things.
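The coll-plus-metro pairing described above can be sketched in plain Python (this is an analogy, not Max code): a list stands in for coll’s indexed storage, and a timed loop stands in for metro’s clock.

```python
# Analogy for the coll + metro step sequencer: a clock fires at a fixed
# interval, and each tick reads the next stored value, wrapping around.
import time

steps = [60, 62, 64, 65, 67, 65, 64, 62]  # MIDI notes, like rows in a coll

def run_sequencer(steps, interval_s=0.125, cycles=1):
    """Fire once per interval, reading the next stored value each time."""
    played = []
    for i in range(len(steps) * cycles):
        note = steps[i % len(steps)]  # counter feeding the lookup, like coll
        played.append(note)
        # in a real patch this tick would trigger a synth voice
        time.sleep(interval_s)
    return played

print(run_sequencer(steps, interval_s=0.0))
```

Everything else in a step sequencer (swing, direction, per-step gates) is a variation on this two-part core.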
I don’t think it’s very sexy if you look at it. The object that I really liked before the Coll object was Table. It was more limited and approachable than Coll, but it had a graphical interface. You had two-dimensional data that you could manipulate directly with the mouse. But the Coll object is a lot more flexible. With the Monome, the face of the Coll object is now tangible.
Often, I’ll peek inside patches of other developers to see how they do things. Sometimes, they’re doing some sort of complex mathematical abstraction, which is satisfying from an intellectual point of view, but I’m more likely to simply dump the values I want into a data object like Coll. It kind of feels like cheating, but it gets the job done.
Have you been working in Max for Live?
Yeah. Most of the work I’ve been doing recently has been in Max for Live.
I think if Max by itself had a weak point, it would be that it doesn’t have a decent timeline. A timeline is one of those features that represents infinite mission creep. Ultimately, what you want is a full-featured DAW. So, putting Max inside a mature DAW is the best solution here.
Prior to Max for Live, most of the things I made were of interest only to me, due to the dependencies involved. In the beginning, the dependencies were racks of hardware. At Berklee, I had codified what I learned about harmony into software, but to make it do anything you had to use external synthesizers and sound generators. Nothing ever made it out of the lab.
Later, you could use soft synths, but that still involved a lot of setup. You had to load the virtual instruments and effects, perform complicated routings, and deal with sync issues. It wasn’t really plug-and-play. I couldn’t take this, and then give it to someone else, and have it be as useful for them.
Now with Max for Live, suddenly the things that I make are portable to other people. I can make these little tools, these little performance things that take real-time input and then output something that’s musically interesting.
That also has ramifications for live performance. I did a recent video using Max 5. It involved a software harmonizer, effects, recording multiple tracks into a DAW, and complex MIDI routings, and that’s like, four different applications, all combined. It took a good hour or so to set this one performance piece up. So, it’s not easy for me to reproduce that performance again, let alone string together a set of pieces to perform. Now, with Max for Live, you can put all these combinations of elements together, all of your soft synths, all your routing, all your effects in this one environment, and save it. Then you can recall it. I can’t tell you what a huge thing that is.
Max for Live also addresses the issue of a DAW trying to be all things to all people…
But they try to be.
Well, they try to be, and that’s where the user interface breaks down. The application sprouts these weird appendages, and after two years of that you end up with something that becomes incomprehensible and unmaintainable. Especially if you’re not willing to take the time to go in and refine the user interface, or piss off your existing user base by throwing out old, crusty features that a small percentage of your user base relies on. But if you jettisoned that code, then you could bring your DAW forward, develop faster, and make your code more reliable.
So what Max for Live does for Live users is it allows people to create this customized environment to do the things that they need to do, without bringing the entire DAW down.
So you can see how Max and Live need each other. Max gets a fully featured timeline, and Live gets a mature environment for user customization.
So, you’re giving away your Max for Live ‘Monome suite’?
It’s free for anyone to download. I’m beta testing a new release right now that adds support for multiple Monomes. So if you have multiple Monomes, you can have one that’s switching between these applications, and another one that’s switching between another set of applications.
I’m replacing all the user interface objects with Live objects, and that enables parameters to be stored and automated. I just sent out a beta of that yesterday, and I’ll hopefully be getting some bug reports and actually making that an official release in the near future.
So, what is Stretta?
Stretta began as a vanity record label. I bought the domain back around 1996. It became clear to me that people weren’t buying music, so the idea of a record label really didn’t make sense anymore. Simultaneously, I was noticing the importance of personal branding on the internet because there are so many forces competing for attention. If you release something, you’ll see a huge spike of interest that falls off rapidly. It doesn’t matter if you spent two years working on something or two days, you’ll see the same spike, then everyone moves on to the next thing. From that I concluded that the better strategy is to release smaller things on a more consistent basis, and this is where having a memorable brand becomes useful.
‘Matthew Davidson’ is not very memorable, and it is kind of long. So, since I already had the domain—and short, pronounceable domain names are a rare commodity these days—I use Stretta. It is short and memorable and consistent across all these social media platforms.
The post Matthew Davidson aka Stretta appeared first on generactive :: generative + interactive + art + design.
Korinsky Studio consists of Abel, Carlo and Max Korinsky. They mainly focus on their shared passion: exploring the possibilities of using sound on vertical surfaces. 3845 m/s is their newest installation, built with their own software, in a former coal power plant in Berlin. See the Korinsky Studio website for more information about their work.
A documentary about the work of the Berlin-based art collective “Korinsky – Atelier für vertikale Flächen” and their sound installation 3845 m/s.
The post Korinsky Studios 3845 m/s Sound Installation appeared first on generactive :: generative + interactive + art + design.
Building Sound is a project instigated by Ella Finer and Fabrizio Manco, PhD candidates at Roehampton University, London.
Building Sound is an on-going research experiment in ways to describe and articulate experiences of sound making and reception within theatre, theory and practice.
As both doctoral studies are concerned with an interrogation of sound within theatrical space, the aim of Building Sound is to provide thinkers and practitioners with an opportunity to offer their own ideas, drawn from their respective practices, about sound within both actual and virtual space.
These sites, in which voices will interweave with their own particular social and cultural definitions of aurality, will, it is hoped, create an open forum in which ideas can complement, collide, and construct.
As well as hosting information about the project and storing archival sound from the symposium, the form of this website itself is the result of an attempt to question how to build a website beyond text.
Both the symposium and website are investigations into interdisciplinary dialogues about working with sound.
The post Building Sound Research Experiment appeared first on generactive :: generative + interactive + art + design.
For Casey Reas, software is the most natural medium to work with. He uses code to express his thoughts—starting with a sketch, composing it in code, and witnessing the imagery that it ultimately creates. We visit his studio to see how he uses color to convey emotion and how his programming language Processing is closing the gap between software and object.
The Creators Project is a partnership between Intel and VICE: http://thecreatorsproject.com/
The post How To Draw With Code | Casey Reas appeared first on generactive :: generative + interactive + art + design.
Hello World! is a documentary series on three programming languages (Processing, openFrameworks, and Pure Data) that have increased the role of coding in the practice of artists, designers, and creators around the world.
The series explores the creative possibilities expanded by these open source tools and the importance of their growing online communities.
Hello World! Processing!
Code and programming allow us to simulate phenomena of the world, approach their data with different scales and perspectives and lead to new and unusual creative processes.
Hello World! Processing is a documentary on creative coding that explores the role that ideas such as process, experimentation, and algorithm play in this creative field, featuring artists, designers, and code enthusiasts. Based on a series of interviews with some of the leading figures of the Processing open programming platform community, the documentary itself is built as a continuous stream of archived references, projects, and concepts shared by this community.
The post Hello World! Processing Documentary appeared first on generactive :: generative + interactive + art + design.
download this track for free at https://soundcloud.com/stretta/holocene
Hardware: monome arc4. Software: electric dharma wheels. More about the monome arc4, including how to buy, can be found at http://monome.org.
I received a production-run arc4 with the final firmware on Friday. This signaled a mad scramble to update my work for that and the latest serialosc with arc support so it’ll be ready when people start receiving their units in a few days. So what do I do on Saturday? Make a new app, of course. Sure, that totally sounds like the responsible thing to do.
After receiving the arc4, I thought it might be a good idea to produce an example that demonstrates a ‘bank’ of encoder values that you can switch between. That gave birth to an application idea involving triggering modal notes from a pool of probabilities across three octaves of scale degrees. There is a separate bank of pitches depending on clockwise or counterclockwise rotation so you can shift the harmony with a simple gesture. The weighting of scale degrees is programmable and editable in real time on screen or with a MIDI controller. This allows for a more controlled structuring of compositional development over longer periods of time. The speed of the rotation determines how often a note is triggered, and can also be used as a modulation parameter for the FM synthesis engine.
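The weighted-pool idea above can be sketched in a few lines of Python. This is my own illustrative code, not the actual electric dharma wheels patch, and the function names are hypothetical; it only shows the core technique of picking scale degrees with programmable probabilities across three octaves, with separate pools per rotation direction.

```python
# Sketch: weighted random selection of modal pitches, in the spirit of
# the app described above (not the real Max patch).
import random

MAJOR = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the seven scale degrees

def build_pool(weights, root=60, octaves=3):
    """Expand per-degree weights into (pitch, weight) pairs over N octaves."""
    pool = []
    for octave in range(octaves):
        for degree, w in zip(MAJOR, weights):
            if w > 0:
                pool.append((root + 12 * octave + degree, w))
    return pool

def trigger(pool, rng=random):
    """Choose one pitch, with probability proportional to its weight."""
    pitches, weights = zip(*pool)
    return rng.choices(pitches, weights=weights, k=1)[0]

# Separate weightings for clockwise vs. counterclockwise rotation,
# so a change of gesture shifts the harmony:
cw_pool = build_pool([4, 0, 2, 0, 3, 0, 1])   # favors root, 3rd, 5th, 7th
ccw_pool = build_pool([0, 3, 0, 4, 0, 2, 1])  # favors 2nd, 4th, 6th, 7th

print(trigger(cw_pool))
```

Editing the weight lists in real time is what lets the harmonic character of the output be steered over long stretches without touching individual notes.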
Relevant synthesis parameters are also editable on the arc as the notes are triggered. The state of these parameters is overlaid on the LEDs, so interesting patterns emerge when this mode is engaged. There was a really awesome bug where switching editing modes also transposed the output modally, so I built in a score feature that allows you to advance a programmed chord progression with a button push.
A sit-the-arc-in-your-lap-and-doodle app has been on my mind a lot and I have at least three good starts in this area, but other priorities have often pushed these out of the way. The prototype arc2 I had lacked the mounting bracket for the USB cable and the logic board was floating free inside the enclosure, so I always had to use it (carefully) on a stationary, flat surface. It is really nice to have an arc that can be moved around or used in the lap. My cat disagrees.
I recorded this video, holocene, as a demonstration of this app, which I’m calling electric dharma wheels. This is the raw output from the electric dharma wheels, with some Eos reverb added after the fact.
The post electric dharma wheels appeared first on generactive :: generative + interactive + art + design.
Be sure to post a comment and like this page if you dig it!
The post Processing Sketch to Processing JS Converter appeared first on generactive :: generative + interactive + art + design.