WebSocket: exploring its power with voice and images

2015/12/26 · JavaScript · 3 comments · websocket

Original article: AlloyTeam

Most of you are probably already familiar with WebSocket; if not, that's fine too — in one sentence:

"The WebSocket protocol is a new protocol introduced with HTML5. It enables full-duplex communication between browser and server."

Compared with the traditional server-push techniques, WebSocket is a huge step forward: we can wave goodbye to comet and long polling, and be glad we live in the HTML5 era~
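As a minimal taste of that full-duplex API (the address below is only a placeholder), a browser-side connection looks like this:

JavaScript

// A minimal sketch of the browser WebSocket API; the URL is a placeholder.
var ws = new WebSocket('ws://example.com:8888');

ws.onopen = function() {
    // the connection is established, either side can push data from now on
    ws.send('hello from the client');
};

ws.onmessage = function(e) {
    // the server can push data at any time without the client asking
    console.log('server pushed:', e.data);
};

ws.onclose = function() {
    console.log('connection closed');
};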

This article explores WebSocket in three parts.

The first part covers common WebSocket usages; the second builds a minimal server-side WebSocket implementation from scratch; the last — and most important — walks through two demos built on WebSocket: image transmission and an online voice chat room. Let's go.

Part 1: Common WebSocket usages

Here are three WebSocket implementations I consider common. (Note: this article assumes a Node.js environment.)

1. socket.io

First, a demo:

JavaScript

var http = require('http');
var io = require('socket.io');

var server = http.createServer(function(req, res) {
    res.writeHead(200, {'content-type': 'text/html; charset=utf-8'});
    res.end();
}).listen(8888);

var socket = io.listen(server);

socket.sockets.on('connection', function(socket) {
    // push an event to this client; 'xxx' and the payload are placeholders
    socket.emit('xxx', { /* options */ });

    // listen for events pushed from the client
    socket.on('xxx', function(data) {
        // do something
    });
});

If you know WebSocket, you almost certainly know socket.io — it is famous, and it is excellent; it takes care of timeouts, the handshake and so on for you. I would guess it is the most widely used way to work with WebSocket. Its best feature is graceful degradation: when a browser doesn't support WebSocket it quietly falls back to long polling and the like, so neither users nor developers need to care about the details. Very convenient.
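For completeness, a client-side counterpart to the server demo above might look like the following sketch (it assumes the page loads the /socket.io/socket.io.js script that socket.io serves; the event name 'xxx' is just the placeholder from the demo):

JavaScript

// Hypothetical client-side counterpart to the server demo above.
var socket = io.connect('http://localhost:8888');

socket.on('connect', function() {
    // send a placeholder event to the server
    socket.emit('xxx', { hello: 'world' });
});

socket.on('xxx', function(data) {
    // handle data pushed by the server
    console.log(data);
});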

But everything has two sides. Precisely because socket.io is so comprehensive, it also brings some baggage: the main issue is that it is bulky, its framing adds quite a bit of communication overhead, and graceful degradation, its biggest selling point, is slowly losing its significance as browser support standardizes:

Chrome Supported in version 4+
Firefox Supported in version 4+
Internet Explorer Supported in version 10+
Opera Supported in version 10+
Safari Supported in version 5+

The point here is not to criticize socket.io or claim it is obsolete — only that sometimes we can consider other implementations as well~

 

2. The http module

We just called socket.io bulky, so now let's look at something lightweight. First, the demo:

JavaScript

var http = require('http');
var server = http.createServer();

// the 'upgrade' event fires when a client asks to switch protocols (i.e. the WebSocket handshake)
server.on('upgrade', function(req) {
    console.log(req.headers);
});

server.listen(8888);

A very simple implementation. Internally socket.io implements WebSocket the same way, it just wraps a lot of handling around it for us — handling we could also add ourselves. Here are two screenshots from the socket.io source:

[Image 1: socket.io source code]

[Image 2: socket.io source code]
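As a rough sketch of what that extra handling looks like (this is not socket.io's actual code), note that the 'upgrade' event also hands us the raw socket, on which we can answer the handshake ourselves — essentially what Part 2 below does with the net module:

JavaScript

var http = require('http');
var crypto = require('crypto');

// Fixed GUID defined by the WebSocket spec (RFC 6455)
var GUID = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11';

var server = http.createServer();

// A sketch only: answer the upgrade request on the raw socket.
server.on('upgrade', function(req, socket) {
    var key = req.headers['sec-websocket-key'];
    var accept = crypto.createHash('sha1').update(key + GUID).digest('base64');

    socket.write('HTTP/1.1 101 Switching Protocols\r\n' +
                 'Upgrade: websocket\r\n' +
                 'Connection: Upgrade\r\n' +
                 'Sec-WebSocket-Accept: ' + accept + '\r\n\r\n');

    // from here on, raw WebSocket frames flow over this socket
});

server.listen(8888);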

 

3. The ws module

An example later in this article uses it, so I'll just mention it here and go into detail there~
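Just for reference, a minimal echo server with ws might look like this (a sketch only, unrelated to the later example):

JavaScript

var WebSocketServer = require('ws').Server;

var wss = new WebSocketServer({ port: 8888 });

wss.on('connection', function(socket) {
    socket.on('message', function(message) {
        // echo whatever the client sent straight back
        socket.send(message);
    });
});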

 

Part 2: Implementing server-side WebSocket ourselves

Having covered a few common WebSocket implementations, now think about it from a developer's point of view:

Compared with the traditional HTTP request/response model, WebSocket adds server-pushed events that the client receives and handles — for day-to-day development the difference really isn't that big.

That is because those modules have already filled in all the pitfalls of data-frame parsing for us. In this second part we will try to build a simple server-side WebSocket module ourselves.

Thanks to 次碳酸钴 for the research support. I'll only touch on this part briefly here; if you're curious, search for the blog 【web技术研究所】.

Implementing server-side WebSocket yourself comes down to two things: using the net module to receive the data stream, and parsing the data according to the official frame layout. Once those two parts are done, all the low-level work is finished.

First, a packet capture of the handshake request a client sends:

The client code is extremely simple:

JavaScript

ws = new WebSocket("ws://127.0.0.1:8888");

[Image 3: packet capture of the client's WebSocket handshake request]
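In case the capture is hard to read, a typical handshake request looks roughly like this (the key below is the sample value from RFC 6455; real keys are random per connection):

GET / HTTP/1.1
Host: 127.0.0.1:8888
Connection: Upgrade
Upgrade: websocket
Sec-WebSocket-Version: 13
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==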

The server must answer based on this key: append a specific fixed string to the key, run SHA-1 over the result, and send it back base64-encoded.

JavaScript

var crypto = require('crypto');
var WS = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11';

require('net').createServer(function(o) {
    var key;
    o.on('data', function(e) {
        if(!key) {
            // grab the Sec-WebSocket-Key sent by the client
            key = e.toString().match(/Sec-WebSocket-Key: (.+)/)[1];
            // append the fixed WS string, SHA-1 it, then base64-encode the digest
            key = crypto.createHash('sha1').update(key + WS).digest('base64');
            // write the response headers back to the client; all of these fields are required
            o.write('HTTP/1.1 101 Switching Protocols\r\n');
            o.write('Upgrade: websocket\r\n');
            o.write('Connection: Upgrade\r\n');
            // this field carries the key the server just computed
            o.write('Sec-WebSocket-Accept: ' + key + '\r\n');
            // an empty line ends the HTTP headers
            o.write('\r\n');
        }
    });
}).listen(8888);

With that, the handshake part is done; what's left is the work of parsing and generating data frames.

First, the frame layout diagram from the spec:

[Image 4: WebSocket frame layout diagram from the spec]

A quick walkthrough:

FIN marks whether this is the final fragment of a message

RSV bits are reserved space, always 0

opcode marks the frame type: whether it is a fragment, whether it should be parsed as binary, heartbeat packets, and so on

Here is the opcode mapping:

[Image 5: opcode mapping — 0x0 continuation frame, 0x1 text frame, 0x2 binary frame, 0x8 connection close, 0x9 ping, 0xA pong (per RFC 6455)]

MASK marks whether a masking key is used (frames from client to server must be masked)

Payload len, together with the extended payload length that may follow, describes the data length — this is the most troublesome part

Payload len is only 7 bits, so as an unsigned integer it can only take values 0–127; such a small number obviously can't describe large payloads. The rule is: if the value is 125 or less it is the actual data length; if it is 126, the next two bytes hold the data length; if it is 127, the next eight bytes hold the data length

Masking-key is the 4-byte masking key
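As a quick worked example of the length rule (a hypothetical frame, not part of the demo code): a 300-byte unmasked text frame from the server starts like this, since 300 does not fit into the 7-bit field:

JavaScript

// Hypothetical first bytes of an unmasked server-to-client text frame
// carrying a 300-byte payload (300 > 125, so the 16-bit extended length is used).
var header = new Buffer([
    0x81,        // FIN = 1, RSV = 000, opcode = 0x1 (text)
    126,         // Payload len = 126 -> read the next 2 bytes as the real length
    300 >> 8,    // high byte of 300 (0x01)
    300 & 0xFF   // low byte of 300  (0x2C)
]);
// the 300 payload bytes follow immediately (no masking key, since MASK = 0)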

Here is the code that parses a data frame:

JavaScript

function decodeDataFrame(e) {
    var i = 0,
        j, s,
        frame = {
            FIN: e[i] >> 7,
            Opcode: e[i++] & 15,
            Mask: e[i] >> 7,
            PayloadLength: e[i++] & 0x7F
        };

    // 126: the real length lives in the next 2 bytes
    if(frame.PayloadLength === 126) {
        frame.PayloadLength = (e[i++] << 8) + e[i++];
    }

    // 127: the real length lives in the next 8 bytes;
    // the high 4 bytes are skipped here, which is fine for payloads under 4GB
    if(frame.PayloadLength === 127) {
        i += 4;
        frame.PayloadLength = (e[i++] << 24) + (e[i++] << 16) + (e[i++] << 8) + e[i++];
    }

    if(frame.Mask) {
        // client-to-server frames are masked: XOR each payload byte with the 4-byte key
        frame.MaskingKey = [e[i++], e[i++], e[i++], e[i++]];

        for(j = 0, s = []; j < frame.PayloadLength; j++) {
            s.push(e[i + j] ^ frame.MaskingKey[j % 4]);
        }
    } else {
        s = e.slice(i, i + frame.PayloadLength);
    }

    s = new Buffer(s);

    // opcode 1 means a text frame, so decode the bytes into a string
    if(frame.Opcode === 1) {
        s = s.toString();
    }

    frame.PayloadData = s;
    return frame;
}

And the code that generates a data frame:

JavaScript

function encodeDataFrame(e) {
    var s = [],
        o = new Buffer(e.PayloadData),
        l = o.length;

    // first byte: FIN plus the opcode
    s.push((e.FIN << 7) + e.Opcode);

    // then the payload length, using the 126/127 extended forms when needed
    if(l < 126) {
        s.push(l);
    } else if(l < 0x10000) {
        s.push(126, (l & 0xFF00) >> 8, l & 0xFF);
    } else {
        s.push(127, 0, 0, 0, 0, (l & 0xFF000000) >> 24, (l & 0xFF0000) >> 16, (l & 0xFF00) >> 8, l & 0xFF);
    }

    // server-to-client frames are not masked, so the payload follows the header directly
    return Buffer.concat([new Buffer(s), o]);
}

都以遵守帧结构暗暗提示图上的去管理,在此边不细讲,小说首要在下一些,假若对那块感兴趣的话能够移动web技巧研商所~

 

Part 3: Transferring images over WebSocket, and a WebSocket voice chat room

Now for the main event: the real point of this article is to show some WebSocket use cases.

1. Transferring images

First, think about the steps involved in transferring an image: the server receives the client's request, reads the image file, and forwards the binary data to the client. How does the client handle it? With a FileReader object, of course.

The client code first:

JavaScript

var ws = new WebSocket("ws://xxx.xxx.xxx.xxx:8888");

ws.onopen = function(){
    console.log("handshake succeeded");
};

ws.onmessage = function(e) {
    // e.data is a Blob here (the server sends binary frames),
    // so read it as a data URL and drop it into an <img>
    var reader = new FileReader();
    reader.onload = function(event) {
        var contents = event.target.result;
        var a = new Image();
        a.src = contents;
        document.body.appendChild(a);
    }
    reader.readAsDataURL(e.data);
};

On receiving a message, the client calls readAsDataURL and appends the base64 image straight into the page.

Now the server-side code:

JavaScript

fs.readdir("skyland", function(err, files) {
    if(err) {
        throw err;
    }
    for(var i = 0; i < files.length; i++) {
        fs.readFile('skyland/' + files[i], function(err, data) {
            if(err) {
                throw err;
            }

            // o is the net socket from the handshake code above
            o.write(encodeImgFrame(data));
        });
    }
});

function encodeImgFrame(buf) {
    var s = [],
        l = buf.length;

    // FIN = 1, opcode = 2 (binary frame)
    s.push((1 << 7) + 2);

    if(l < 126) {
        s.push(l);
    } else if(l < 0x10000) {
        s.push(126, (l & 0xFF00) >> 8, l & 0xFF);
    } else {
        s.push(127, 0, 0, 0, 0, (l & 0xFF000000) >> 24, (l & 0xFF0000) >> 16, (l & 0xFF00) >> 8, l & 0xFF);
    }

    return Buffer.concat([new Buffer(s), buf]);
}

Note the line s.push((1 << 7) + 2): the opcode is simply hard-coded to 2 here, i.e. a binary frame, so the client won't try to toString() the received data — otherwise it would throw~

The code is simple enough; what I'd like to share here is how the transfer speed of images over WebSocket turned out.

The test set was a batch of images, 8.24 MB in total.

A plain static file server takes about 20 s (the server is far away).

A CDN takes about 2.8 s.

So what about our WebSocket approach??!

The answer: also about 20 s. Disappointing, isn't it... The time is spent in transmission, not in reading the images on the server — on the local machine the same images finish in about 1 s. So a raw data stream cannot break the distance limit and speed up transfer either.

上面大家来探视websocket的另五个用法~

 

2. Building a voice chat room with WebSocket

First, let's lay out what the voice chat room needs to do:

顾客步入频道随后从迈克风输入音频,然后发送给后台转载给频道里面包车型客车别的人,其余人接收到消息实行广播

The difficulty looks like it lives in two places: first, capturing the audio input; second, playing back the received data stream.

Let's start with the audio input, which uses HTML5's getUserMedia method — but be warned, there is a big pitfall when you take this method live. I'll get to it at the end; code first:

JavaScript

if (navigator.getUserMedia) {
    navigator.getUserMedia(
        { audio: true },
        function (stream) {
            var rec = new SRecorder(stream);
            recorder = rec;
        })
}

The first argument is {audio: true}, enabling audio only; then we create an SRecorder object, and basically everything that follows happens on that object. If the code is running locally, the browser should now ask whether to allow microphone input; confirm and we're up and running.

Next, let's look at what the SRecorder constructor is; here are the important parts:

JavaScript

var SRecorder = function(stream) {
    ……
   var context = new AudioContext();
    var audioInput = context.createMediaStreamSource(stream);
    var recorder = context.createScriptProcessor(4096, 1, 1);
    ……
}

AudioContext is an audio context object. Anyone who has done sound filtering will know the idea: "before a piece of audio reaches the speakers and is played, we intercept it along the way, and so we obtain the audio data; that interception is done by window.AudioContext, and all our audio operations are based on this object." Through an AudioContext we can create different AudioNode nodes and, for example, add filters to play some special sound.

Recording works on the same principle — we still go through AudioContext — but there is one extra step: receiving the microphone's audio input, rather than requesting an audio file's ArrayBuffer with ajax and decoding it as we usually would. Receiving the microphone requires the createMediaStreamSource method; note that its argument is exactly the stream handed to the success callback (the second argument) of getUserMedia.
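For contrast, the "usual" file-based flow mentioned above looks roughly like this (a sketch; the URL is a placeholder):

JavaScript

// Sketch of the usual flow: fetch an audio file, decode it, then play it.
var context = new AudioContext();
var xhr = new XMLHttpRequest();
xhr.open('GET', '/some-audio.wav', true);   // placeholder URL
xhr.responseType = 'arraybuffer';
xhr.onload = function() {
    context.decodeAudioData(xhr.response, function(buffer) {
        var source = context.createBufferSource();
        source.buffer = buffer;
        source.connect(context.destination);
        source.start(0);
    });
};
xhr.send();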

There is also the createScriptProcessor method; its official description is:

Creates a ScriptProcessorNode, which can be used for direct audio processing via JavaScript.

——————

In short: this method lets us process the captured audio with JavaScript.

We've finally arrived at audio capture! Victory is in sight!

接下去让大家把话筒的输入和拍子搜罗相连起来

JavaScript

audioInput.connect(recorder);
recorder.connect(context.destination);

The official description of context.destination is:

The destination property of the AudioContext interface returns an AudioDestinationNode representing the final destination of all audio in the context.

——————

That is, context.destination returns the final destination of all audio in the context.

Good. At this point we still need an event that listens for the captured audio:

JavaScript

recorder.onaudioprocess = function (e) {
    audioData.input(e.inputBuffer.getChannelData(0));
}

audioData is an object I found online; I only added a clear method because it's needed later. The real highlight is its encodeWAV method — the original author compresses and optimizes the audio several times over. The complete code is included at the end of this article.

At this point the whole "user joins a channel and audio is captured from the microphone" part is done. Next comes sending the audio stream to the server, and here things get a bit awkward: as mentioned earlier, WebSocket uses the opcode to distinguish whether the data coming back is text or binary, but what onaudioprocess feeds into input is a plain array, while playback ultimately needs a Blob of {type: 'audio/wav'}. So before sending we must convert the arrays into a WAV Blob — which is exactly where the encodeWAV method mentioned above comes in.

The server side looks easy: it only needs to forward.

Local testing did indeed work. But then the giant pitfall arrived: when the program runs on a server, calling getUserMedia complains that it must be in a secure context — i.e. it needs https, which means ws must also become wss... So the server code no longer uses our own handshake, parsing and encoding; it looks like this instead:

JavaScript

var https = require('https');
var fs = require('fs');
var ws = require('ws');
var userMap = Object.create(null);
var options = {
    key: fs.readFileSync('./privatekey.pem'),
    cert: fs.readFileSync('./certificate.pem')
};
var server = https.createServer(options, function(req, res) {
    res.writeHead(200, {
        'Content-Type' : 'text/html'
    });

    fs.readFile('./testaudio.html', function(err, data) {
        if(err) {
            return;
        }

        res.end(data);
    });
});

var wss = new ws.Server({server: server});

wss.on('connection', function(o) {
    o.on('message', function(message) {
        // messages starting with "user" register the sender into the channel
        if(message.indexOf('user') === 0) {
            var user = message.split(':')[1];
            userMap[user] = o;
        } else {
            // everything else (the audio blobs) is forwarded to everyone in the channel
            for(var u in userMap) {
                userMap[u].send(message);
            }
        }
    });
});

server.listen(8888);

The code is still simple: it uses the https module together with the ws module mentioned at the beginning; userMap is the simulated channel and only implements the core forwarding.

I used the ws module because pairing it with https to get wss is just too convenient, with zero conflict with the logic code.

I won't cover setting up https here; mainly you need a private key, a CSR and the signed certificate file. Interested readers can look into it (without it, though, you can't use getUserMedia in a production environment anyway...).

Below is the complete frontend code (it assumes the page has a username input with id "a", join/leave buttons with ids "b" and "c", and an <audio> element for playback):

JavaScript

var a = document.getElementById('a');
var b = document.getElementById('b');
var c = document.getElementById('c');
 
navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia;
 
var gRecorder = null;
var audio = document.querySelector('audio');
var door = false;
var ws = null;
 
b.onclick = function() {
    if(a.value === '') {
        alert('Please enter a user name');
        return false;
    }
    if(!navigator.getUserMedia) {
        alert('Sorry, your device does not support voice chat');
        return false;
    }
 
    SRecorder.get(function (rec) {
        gRecorder = rec;
    });
 
    ws = new WebSocket("wss://x.x.x.x:8888");
 
    ws.onopen = function() {
        console.log('handshake succeeded');
        ws.send('user:' + a.value);
    };
 
    ws.onmessage = function(e) {
        receive(e.data);
    };
 
    document.onkeydown = function(e) {
        if(e.keyCode === 65) {
            if(!door) {
                gRecorder.start();
                door = true;
            }
        }
    };
 
    document.onkeyup = function(e) {
        if(e.keyCode === 65) {
            if(door) {
                ws.send(gRecorder.getBlob());
                gRecorder.clear();
                gRecorder.stop();
                door = false;
            }
        }
    }
}
 
c.onclick = function() {
    if(ws) {
        ws.close();
    }
}
 
var SRecorder = function(stream) {
    config = {};
 
    config.sampleBits = config.sampleBits || 8;            // output sample size: 8 bits
    config.sampleRate = config.sampleRate || (44100 / 6);  // output sample rate: 7350 Hz
 
    var context = new AudioContext();
    var audioInput = context.createMediaStreamSource(stream);
    var recorder = context.createScriptProcessor(4096, 1, 1);
 
    var audioData = {
        size: 0          // length of the recorded data
        , buffer: []     // recording buffer
        , inputSampleRate: context.sampleRate    // input sample rate
        , inputSampleBits: 16       // input sample size: 8 or 16
        , outputSampleRate: config.sampleRate    // output sample rate
        , oututSampleBits: config.sampleBits       // output sample size: 8 or 16
        , clear: function() {
            this.buffer = [];
            this.size = 0;
        }
        , input: function (data) {
            this.buffer.push(new Float32Array(data));
            this.size += data.length;
        }
        , compress: function () { // merge and downsample
            // merge the buffered Float32Arrays into one
            var data = new Float32Array(this.size);
            var offset = 0;
            for (var i = 0; i < this.buffer.length; i++) {
                data.set(this.buffer[i], offset);
                offset += this.buffer[i].length;
            }
            // downsample by keeping one sample out of every `compression`
            var compression = parseInt(this.inputSampleRate / this.outputSampleRate);
            var length = data.length / compression;
            var result = new Float32Array(length);
            var index = 0, j = 0;
            while (index < length) {
                result[index] = data[j];
                j += compression;
                index++;
            }
            return result;
        }
        , encodeWAV: function () {
            var sampleRate = Math.min(this.inputSampleRate, this.outputSampleRate);
            var sampleBits = Math.min(this.inputSampleBits, this.oututSampleBits);
            var bytes = this.compress();
            var dataLength = bytes.length * (sampleBits / 8);
            var buffer = new ArrayBuffer(44 + dataLength);
            var data = new DataView(buffer);
 
            var channelCount = 1; // mono
            var offset = 0;
 
            var writeString = function (str) {
                for (var i = 0; i < str.length; i++) {
                    data.setUint8(offset + i, str.charCodeAt(i));
                }
            };
 
            // RIFF chunk descriptor
            writeString('RIFF'); offset += 4;
            // byte count from the next address to the end of file, i.e. file size - 8
            data.setUint32(offset, 36 + dataLength, true); offset += 4;
            // WAV file marker
            writeString('WAVE'); offset += 4;
            // format chunk marker
            writeString('fmt '); offset += 4;
            // size of the format chunk, normally 0x10 = 16
            data.setUint32(offset, 16, true); offset += 4;
            // audio format (1 = PCM samples)
            data.setUint16(offset, 1, true); offset += 2;
            // number of channels
            data.setUint16(offset, channelCount, true); offset += 2;
            // sample rate: samples per second, i.e. the playback speed of each channel
            data.setUint32(offset, sampleRate, true); offset += 4;
            // byte rate (average bytes per second): channels x sample rate x bits per sample / 8
            data.setUint32(offset, channelCount * sampleRate * (sampleBits / 8), true); offset += 4;
            // block align: bytes per sample across all channels = channels x bits per sample / 8
            data.setUint16(offset, channelCount * (sampleBits / 8), true); offset += 2;
            // bits per sample
            data.setUint16(offset, sampleBits, true); offset += 2;
            // data chunk marker
            writeString('data'); offset += 4;
            // total number of sample bytes, i.e. total size - 44
            data.setUint32(offset, dataLength, true); offset += 4;
            // write the sample data
            if (sampleBits === 8) {
                for (var i = 0; i < bytes.length; i++, offset++) {
                    var s = Math.max(-1, Math.min(1, bytes[i]));
                    var val = s < 0 ? s * 0x8000 : s * 0x7FFF;
                    val = parseInt(255 / (65535 / (val + 32768)));
                    data.setInt8(offset, val, true);
                }
            } else {
                for (var i = 0; i < bytes.length; i++, offset += 2) {
                    var s = Math.max(-1, Math.min(1, bytes[i]));
                    data.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
                }
            }
 
            return new Blob([data], { type: 'audio/wav' });
        }
    };
 
    this.start = function () {
        audioInput.connect(recorder);
        recorder.connect(context.destination);
    }
 
    this.stop = function () {
        recorder.disconnect();
    }
 
    this.getBlob = function () {
        return audioData.encodeWAV();
    }
 
    this.clear = function() {
        audioData.clear();
    }
 
    recorder.onaudioprocess = function (e) {
        audioData.input(e.inputBuffer.getChannelData(0));
    }
};
 
SRecorder.get = function (callback) {
    if (callback) {
        if (navigator.getUserMedia) {
            navigator.getUserMedia(
                { audio: true },
                function (stream) {
                    var rec = new SRecorder(stream);
                    callback(rec);
                })
        }
    }
}
 
function receive(e) {
    audio.src = window.URL.createObjectURL(e);
}

Note: hold the A key to talk, release it to send.

I did try button-free, real-time talk by sending on a setInterval, but the background noise was quite heavy and the result was poor. That would need another layer on top of encodeWAV to strip ambient noise, so I went with the more convenient push-to-talk approach instead.

 

In this article we first looked at WebSocket's common usages, then, following the spec, tried parsing and generating data frames ourselves, which gives a deeper understanding of WebSocket.

Finally, the two demos showed WebSocket's potential. The voice chat room demo touches on quite a lot; if you've never worked with the AudioContext object, it's best to read up on AudioContext first.

That's it for this article~ Thoughts and questions are welcome — let's discuss them together~

 
