<?xml version="1.0" standalone="no"?>

<!DOCTYPE section PUBLIC "-//OASIS//DTD DocBook XML V4.4//EN" "http://www.oasis-open.org/docbook/xml/4.4/docbookx.dtd" [

]>

<section id="sn-monitoring">
  <title>Monitoring</title>
  <para>
    If you are recording an acoustic instrument or voice with no
    pre-existing recorded material as an accompaniment, then you probably
    don't need to worry about monitoring. Just make sure you've made the
    right <link linkend="sn-jack">connections</link> and you should be ready
    to record without reading this section.
  </para>

  <para>
    However, if a musician is playing an instrument (it doesn't matter what
    kind) while listening to some pre-existing material, then it is
    important that some mechanism exists to allow her to hear both her own
    playing and the accompaniment. The same is true in a slightly different
    way if the instrument makes no sound until the electrical signal it
    creates has been amplified and fed to some loudspeakers. Listening to
    the performance in this way is called monitoring.
  </para>

  <para>
    So, if you are recording an electrical or software instrument/signal,
    and/or the musician wants to listen to existing material while
    performing, then you need to ensure that the signal routing is set up
    to allow monitoring. You have two basic choices:
  </para>

  <section id="hardware-monitoring">
    <title>Hardware Monitoring</title>
    <para>
      Hardware monitoring uses the capabilities of your audio interface to
      route an incoming signal (e.g. someone playing a guitar into a
      microphone) to an output connection (for example, the speaker outputs,
      or a dedicated analog monitoring stereo pair). Most audio interfaces
      can do this, but how you get them to do so, and what else they can
      do, varies greatly. We can divide audio interfaces into three general
      categories:
    </para>

    <itemizedlist>
      <listitem>
        <para>
          relatively simple, typically stereo, devices that allow the signal
          being recorded to be routed back to the main outputs (most
          "consumer" audio interfaces fit this description, along with
          anything that provides an "AC97-compliant CODEC")
        </para>
      </listitem>

      <listitem>
        <para>
          multichannel devices that allow a given input channel to be routed
          back to its corresponding output channel (the main example is the
          RME Digi9652)
        </para>
      </listitem>

      <listitem>
        <para>
          multichannel devices that allow any input channel, along with any
          playback channel, to be routed to any output channel (the RME HDSP
          and various interfaces based on the envy24/ice1712 chipsets, such
          as the M-Audio Delta 1010, EZ-8 and various Terratec cards)
        </para>
      </listitem>
    </itemizedlist>

    <section id="monitoring-consumer-audio-interfaces">
      <title>"Consumer" audio interfaces and monitoring</title>
      <para>
        For interfaces in the first category, there is no standard method of
        getting the signal routing correct. The variations in the wiring of
        hardware mixing chips, and in the capabilities of those chips, mean
        that you will have to get familiar with a hardware mixer control
        program and the details of your audio interface. In simple cases,
        increasing the level named "Line In" or "Mic" in the hardware mixer
        control program will suffice. But this is not a general rule,
        because there is no general rule.
      </para>
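
      <para>
        As a concrete (and purely illustrative) sketch, the following
        program uses the ALSA mixer API to unmute a control and raise its
        level, which is all that "enabling monitoring" amounts to on many
        consumer interfaces. The element name "Line" and the device name
        "default" are assumptions; check the names that
        <command>amixer scontrols</command> reports for your interface, or
        simply use <command>alsamixer</command> instead.
      </para>
      <programlisting><![CDATA[
/* Sketch: raise the "Line" monitor level with the ALSA mixer API.
   "default" and "Line" are assumptions -- adjust them for your card. */
#include <stdio.h>
#include <alsa/asoundlib.h>

int main (void)
{
    snd_mixer_t *mixer;
    snd_mixer_selem_id_t *sid;
    snd_mixer_elem_t *elem;
    long min, max;

    if (snd_mixer_open (&mixer, 0) < 0) {
        fprintf (stderr, "cannot open mixer\n");
        return 1;
    }

    snd_mixer_attach (mixer, "default");          /* card to control (assumed) */
    snd_mixer_selem_register (mixer, NULL, NULL);
    snd_mixer_load (mixer);

    snd_mixer_selem_id_alloca (&sid);
    snd_mixer_selem_id_set_index (sid, 0);
    snd_mixer_selem_id_set_name (sid, "Line");    /* mixer element (assumed) */

    if ((elem = snd_mixer_find_selem (mixer, sid)) == NULL) {
        fprintf (stderr, "no mixer element named \"Line\"\n");
        snd_mixer_close (mixer);
        return 1;
    }

    /* unmute the element and set its level to 75% of its range */
    snd_mixer_selem_set_playback_switch_all (elem, 1);
    snd_mixer_selem_get_playback_volume_range (elem, &min, &max);
    snd_mixer_selem_set_playback_volume_all (elem, min + ((max - min) * 3) / 4);

    snd_mixer_close (mixer);
    return 0;
}
]]></programlisting>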

      <para>
        The following diagram shows a fairly typical AC97-based audio
        interface schematic:
      </para>
      <mediaobject>
        <imageobject>
          <imagedata fileref="images/simplemixer.png"/>
        </imageobject>
      </mediaobject>
      <para>
        Notice:
      </para>

      <itemizedlist>
        <listitem>
          <para>
            there are multiple input connections, but only one can be used
            as the capture source
          </para>
        </listitem>

        <listitem>
          <para>
            it is (normally) possible to route the input signals back to the
            outputs, and independently control the gain for this "monitored"
            signal
          </para>
        </listitem>

        <listitem>
          <para>
            it may or may not be possible to choose the playback stream as
            the capture stream
          </para>
        </listitem>
      </itemizedlist>
    </section>

    <section id="monitoring-prosumer-audio-interfaces">
      <title>High end "prosumer" interfaces and monitoring</title>
      <para>
        For the only interface in the second category, the RME Digi9652
        ("Hammerfall"), the direct monitoring facilities are simplistic but
        useful in some circumstances. They are best controlled using
        <emphasis>JACK hardware monitoring</emphasis>.
      </para>
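
      <para>
        To show what this means at the API level, here is a rough sketch
        (not taken from Ardour) of a JACK client that asks for hardware
        monitoring to be switched on for one capture port. The client and
        port names are assumptions; the request only has an effect on ports
        that carry the <literal>JackPortCanMonitor</literal> flag, which a
        driver with direct monitoring support, such as the one for the
        Digi9652, should provide.
      </para>
      <programlisting><![CDATA[
/* Sketch: ask JACK to enable hardware monitoring on one capture port.
   The port name "system:capture_1" is an assumption. */
#include <stdio.h>
#include <jack/jack.h>

int main (void)
{
    jack_client_t *client;
    jack_port_t *port;
    const char *port_name = "system:capture_1";    /* assumed port name */

    if ((client = jack_client_open ("hwmon-sketch", JackNullOption, NULL)) == NULL) {
        fprintf (stderr, "cannot connect to JACK\n");
        return 1;
    }

    if ((port = jack_port_by_name (client, port_name)) == NULL) {
        fprintf (stderr, "no such port: %s\n", port_name);
    } else if ((jack_port_flags (port) & JackPortCanMonitor) == 0) {
        fprintf (stderr, "%s cannot do hardware monitoring\n", port_name);
    } else {
        jack_port_request_monitor (port, 1);       /* 1 = on, 0 = off */
    }

    jack_client_close (client);
    return 0;
}
]]></programlisting>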

      <para>
        When using one of the interfaces in the third category, most people
        find it useful to use hardware monitoring, but prefer to control it
        using a dedicated hardware mixer control program. If you have an RME
        HDSP system, then <command>hdspmixer</command> is the relevant
        program. For interfaces based on the envy24/ice1712/ice1724
        chipsets, such as the Delta1010, Terratecs and others,
        <command>envy24ctl</command> is the right choice. Both programs
        offer access to very powerful matrix mixers that permit many
        different variations on signal routing, for both incoming signals
        and the signals being played back by the computer. You will need to
        spend some time working with these programs to grasp their potential
        and their usage in different situations.
      </para>

      <para>
        The following diagram gives a partial view of the monitoring
        schematics for this class of audio interface. Each input can be
        routed back to any output, and each such routing has its own gain
        control. The diagram shows only the routings for "in1", to avoid
        becoming completely incomprehensible.
      </para>
      <mediaobject>
        <imageobject>
          <imagedata fileref="images/matrixmixer.png"/>
        </imageobject>
      </mediaobject>
    </section>
  </section>

  <section id="jack-hardware-monitoring">
    <title>JACK hardware monitoring</title>
    <para></para>
  </section>

  <section id="software-monitoring">
    <title>Software monitoring</title>
    <para>
      Much simpler than hardware monitoring is "software monitoring". This
      means that any incoming signal (say, through a Line In connector) is
      delivered to software (such as Ardour), which can then deliver it back
      to any output it chooses, possibly after applying various kinds of
      processing. The software can also mix signals together before
      delivering them back to the output. The fact that software monitoring
      can blend incoming audio with pre-recorded material, while adjusting
      for latency and other factors, is the big plus for this method. The
      major downside is latency: there will always be a delay between the
      signal arriving at your audio interface inputs and it re-emerging
      from the outputs, and if this delay is too long, it causes problems
      for the performer, who will sense a lag between pressing a key,
      pulling the bow or hitting the drum and hearing the sound it
      produces.
    </para>
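
    <para>
      The essence of this approach can be seen in the following sketch of a
      trivial JACK "pass-through" client: audio arrives on a capture port,
      the software does something with it (here, nothing at all), and hands
      it straight back to a playback port. The client and hardware port
      names are assumptions; Ardour performs the same task, with mixing,
      processing and latency compensation added, when it monitors in
      software.
    </para>
    <programlisting><![CDATA[
/* Sketch: software monitoring as a minimal JACK pass-through client.
   Hardware port names below are assumptions; see "jack_lsp" for yours. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <jack/jack.h>

static jack_port_t *input_port;
static jack_port_t *output_port;

/* called by JACK once per period: copy the input buffer to the output */
static int process (jack_nframes_t nframes, void *arg)
{
    jack_default_audio_sample_t *in  = jack_port_get_buffer (input_port, nframes);
    jack_default_audio_sample_t *out = jack_port_get_buffer (output_port, nframes);
    memcpy (out, in, sizeof (jack_default_audio_sample_t) * nframes);
    return 0;
}

int main (void)
{
    jack_client_t *client;

    if ((client = jack_client_open ("softmon-sketch", JackNullOption, NULL)) == NULL) {
        fprintf (stderr, "cannot connect to JACK\n");
        return 1;
    }

    jack_set_process_callback (client, process, NULL);

    input_port  = jack_port_register (client, "in", JACK_DEFAULT_AUDIO_TYPE,
                                      JackPortIsInput, 0);
    output_port = jack_port_register (client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                      JackPortIsOutput, 0);

    if (jack_activate (client)) {
        fprintf (stderr, "cannot activate client\n");
        return 1;
    }

    /* route hardware input through this client to the hardware output */
    jack_connect (client, "system:capture_1", jack_port_name (input_port));
    jack_connect (client, jack_port_name (output_port), "system:playback_1");

    sleep (30);                    /* monitor for 30 seconds, then exit */
    jack_client_close (client);
    return 0;
}
]]></programlisting>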

    <para>
      However, if your system is capable of low latency audio, it's likely
      that you can use software monitoring effectively if it suits your
      goals. For example, at a sample rate of 48kHz with a buffer size of
      64 frames, each buffer represents only about 1.3 milliseconds of
      audio.
    </para>
  </section>

  <section id="controlling-monitoring-within-ardour">
    <title>Controlling monitoring choices within Ardour</title>
    <para></para>
  </section>
<!--
	<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" 
		href="Some_Subsection.xml" />
	-->
</section>