Acoustic Testing in JavaScript
Basic Acoustic Testing with the Web Audio API
The Web Audio API is the browser's built-in audio processing interface, well suited to basic acoustic tests such as frequency-response and volume measurement.
初始化音频上下文
创建AudioContext对象并获取用户麦克风权限:

const audioContext = new (window.AudioContext || window.webkitAudioContext)();
navigator.mediaDevices.getUserMedia({ audio: true })
.then(stream => {
const microphone = audioContext.createMediaStreamSource(stream);
microphone.connect(audioContext.destination);
});
Frequency Analysis
Use an AnalyserNode to obtain real-time frequency data:

const analyser = audioContext.createAnalyser();
analyser.fftSize = 2048;
microphone.connect(analyser);

const bufferLength = analyser.frequencyBinCount; // fftSize / 2 = 1024 bins
const dataArray = new Uint8Array(bufferLength);
analyser.getByteFrequencyData(dataArray); // fills dataArray with the current spectrum
Volume Detection
Detect the signal level by computing its root-mean-square (RMS) value:

// ScriptProcessorNode is deprecated (AudioWorklet is the modern replacement),
// but it is still widely supported and sufficient for a simple level meter.
const processor = audioContext.createScriptProcessor(2048, 1, 1);
microphone.connect(processor);
processor.connect(audioContext.destination); // keeps the node processing; its output stays silent unless written

processor.onaudioprocess = event => {
  const input = event.inputBuffer.getChannelData(0);
  let sum = 0;
  for (let i = 0; i < input.length; i++) {
    sum += input[i] * input[i];
  }
  const rms = Math.sqrt(sum / input.length);
  console.log(`Current level (RMS): ${rms}`);
};
Generating a Test Tone
Use an OscillatorNode to generate a tone at a specific frequency:

const oscillator = audioContext.createOscillator();
const gain = audioContext.createGain();
gain.gain.value = 0.5; // avoid a full-scale tone straight into the speakers
oscillator.type = 'sine'; // sine wave
oscillator.frequency.value = 1000; // 1 kHz test tone
oscillator.connect(gain).connect(audioContext.destination);
oscillator.start();
Calibration and Visualization
Draw a real-time spectrum with Canvas:

const canvas = document.getElementById('visualizer');
const ctx = canvas.getContext('2d');

function draw() {
  requestAnimationFrame(draw);
  analyser.getByteFrequencyData(dataArray);
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  dataArray.forEach((value, i) => {
    ctx.fillRect(i * 2, canvas.height - value, 2, value); // one 2px-wide bar per bin
  });
}
draw();
Notes
- Microphone access is only granted in secure contexts (HTTPS or localhost)
- Mobile browsers may introduce noticeable audio latency
- Complex acoustic tests (e.g. THD measurement) may need WebAssembly for higher-precision processing