We present and discuss the Agent Designer, a system that enables users of digital audio workstations to generate novel high-level structures for their compositions based on previous examples. The system uses variable-order Markov models and rule induction to learn both temporal relations and structural relations between parts in a piece of music. As is usual in machine learning, however, the quality of the learning can be greatly improved when users specify relevant features. The Agent Designer therefore raises important design and human–computer interaction problems, as well as algorithmic challenges. We present a number of studies that assess how effective the Agent Designer is and that inform the design of a user interface best enabling users to obtain quality results from the system. We show that the Agent Designer is effective for certain musical styles, such as loop-based electronic music, and that we, as expert users, can design agents that produce the most effective results. We also note that fully automating this process remains a challenge.
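
To illustrate the kind of temporal modelling the abstract refers to, the following is a minimal sketch of a variable-order Markov model with back-off: it learns next-symbol statistics over section labels from example sequences and generates new sequences using the longest matching context. The class name, parameters, and back-off scheme are illustrative assumptions, not the Agent Designer's actual implementation.

```python
import random
from collections import defaultdict, Counter

class VariableOrderMarkov:
    """Variable-order Markov model (illustrative sketch, not the
    Agent Designer's implementation): predicts the next symbol from
    the longest observed context up to max_order, backing off to
    shorter contexts when the long one was never seen."""

    def __init__(self, max_order=3):
        self.max_order = max_order
        # Maps a context tuple to counts of the symbols that followed it.
        self.counts = defaultdict(Counter)

    def train(self, sequence):
        # Record next-symbol counts for every context length 0..max_order.
        for order in range(self.max_order + 1):
            for i in range(len(sequence) - order):
                context = tuple(sequence[i:i + order])
                self.counts[context][sequence[i + order]] += 1

    def next(self, history, rng=random):
        # Back off from the longest usable context down to the empty one.
        for order in range(min(self.max_order, len(history)), -1, -1):
            context = tuple(history[len(history) - order:])
            if context in self.counts:
                choices = self.counts[context]
                symbols = list(choices)
                weights = [choices[s] for s in symbols]
                return rng.choices(symbols, weights=weights)[0]
        return None  # model has no statistics at all

    def generate(self, length, seed=()):
        out = list(seed)
        for _ in range(length):
            symbol = self.next(out)
            if symbol is None:
                break
            out.append(symbol)
        return out
```

For example, training on a section-label sequence such as `["intro", "A", "A", "B", "A", "B", "outro"]` and calling `generate` produces new high-level structures that locally resemble the example.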