Next, following the flow chart, I will expand on and analyze each function other than NewRequest step by step.
(*Client).do
The core of the (*Client).do method is a for loop with no loop condition; it exits only via return statements.
for {
	// For all but the first request, create the next
	// request hop and replace req.
	if len(reqs) > 0 {
		loc := resp.Header.Get("Location")
		// ... code omitted ...
		err = c.checkRedirect(req, reqs)
		// ... more code omitted ...
	}

	reqs = append(reqs, req)
	var err error
	var didTimeout func() bool
	if resp, didTimeout, err = c.send(req, deadline); err != nil {
		// c.send() always closes req.Body
		reqBodyClosed = true
		// ... code omitted ...
		return nil, uerr(err)
	}

	var shouldRedirect bool
	redirectMethod, shouldRedirect, includeBody = redirectBehavior(req.Method, resp, reqs[0])
	if !shouldRedirect {
		return resp, nil
	}

	req.closeBody()
}
In the code above, the first pass through the loop calls c.send; once a response is received, the client checks whether it needs to be redirected. If so, the loop continues; otherwise the response is returned.
Before diving into the redirect flow, here is a quick look at the checkRedirect function:
func defaultCheckRedirect(req *Request, via []*Request) error {
	if len(via) >= 10 {
		return errors.New("stopped after 10 redirects")
	}
	return nil
}

// ...

func (c *Client) checkRedirect(req *Request, via []*Request) error {
	fn := c.CheckRedirect
	if fn == nil {
		fn = defaultCheckRedirect
	}
	return fn(req, via)
}
From this we can see that users may define their own redirect-checking rules; if no custom rule is provided, a request may be redirected at most 10 times. A minimal sketch of a custom CheckRedirect follows.
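To make this concrete, here is a minimal sketch of a client with a custom redirect policy; the three-redirect limit and the URL are placeholders chosen for illustration:

package main

import (
	"errors"
	"fmt"
	"net/http"
)

func main() {
	// A client whose CheckRedirect stops after 3 hops instead of the default 10.
	client := &http.Client{
		CheckRedirect: func(req *http.Request, via []*http.Request) error {
			if len(via) >= 3 {
				return errors.New("stopped after 3 redirects")
			}
			return nil
		},
	}

	resp, err := client.Get("http://example.com")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}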
(*Client).send
The logic of the (*Client).send method is fairly simple; the main question is whether the user has set a CookieJar implementation on the http.Client's Jar field. The main flow is as follows (a usage sketch for the cookie jar comes after the source below):
If a CookieJar is present, add the stored cookies to the request.
Call the send function.
If a CookieJar is present, save the cookies from the response into the jar.
// didTimeout is non-nil only if err != nil.
func (c *Client) send(req *Request, deadline time.Time) (resp *Response, didTimeout func() bool, err error) {
	if c.Jar != nil {
		for _, cookie := range c.Jar.Cookies(req.URL) {
			req.AddCookie(cookie)
		}
	}
	resp, didTimeout, err = send(req, c.transport(), deadline)
	if err != nil {
		return nil, didTimeout, err
	}
	if c.Jar != nil {
		if rc := resp.Cookies(); len(rc) > 0 {
			c.Jar.SetCookies(req.URL, rc)
		}
	}
	return resp, nil, nil
}
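For completeness, a minimal sketch of wiring a cookie jar into a client using the standard net/http/cookiejar package; the URL is a placeholder:

package main

import (
	"fmt"
	"net/http"
	"net/http/cookiejar"
)

func main() {
	// cookiejar.Jar satisfies the http.CookieJar interface, so cookies set by
	// one response are automatically attached to subsequent requests.
	jar, err := cookiejar.New(nil)
	if err != nil {
		panic(err)
	}
	client := &http.Client{Jar: jar}

	resp, err := client.Get("http://example.com")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(len(jar.Cookies(resp.Request.URL)), "cookies stored")
}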
We also need to look at the call to c.transport(). If the user has not set a Transport on the http.Client, Go's DefaultTransport is used.
That transport implements the RoundTripper interface; in Go, a RoundTripper is defined as having "the ability to execute a single HTTP transaction, obtaining the Response for a given Request".
func (c *Client) transport() RoundTripper {
	if c.Transport != nil {
		return c.Transport
	}
	return DefaultTransport
}
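Because Client.Transport accepts any RoundTripper, wrapping DefaultTransport is a convenient way to observe every request. The following is only an illustrative sketch; the loggingTransport type and its behaviour are invented for this example:

package main

import (
	"log"
	"net/http"
	"time"
)

// loggingTransport is a hypothetical RoundTripper that times each request
// and delegates the real work to an inner RoundTripper.
type loggingTransport struct {
	inner http.RoundTripper
}

func (t *loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	start := time.Now()
	resp, err := t.inner.RoundTrip(req)
	log.Printf("%s %s took %v", req.Method, req.URL, time.Since(start))
	return resp, err
}

func main() {
	client := &http.Client{
		Transport: &loggingTransport{inner: http.DefaultTransport},
	}
	resp, err := client.Get("http://example.com")
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
}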
send
The send function checks the Request's URL, the rt parameter, and the Header values; if the URL or rt is nil, it returns an error directly. If user information is set on the request URL, it also builds a Basic auth Authorization header, and finally it calls rt.RoundTrip to obtain the response.
func send(ireq *Request, rt RoundTripper, deadline time.Time) (resp *Response, didTimeout func() bool, err error) {
	req := ireq // req is either the original request, or a modified fork

	// ... code omitted ...

	if u := req.URL.User; u != nil && req.Header.Get("Authorization") == "" {
		username := u.Username()
		password, _ := u.Password()
		forkReq()
		req.Header = cloneOrMakeHeader(ireq.Header)
		req.Header.Set("Authorization", "Basic "+basicAuth(username, password))
	}

	if !deadline.IsZero() {
		forkReq()
	}
	stopTimer, didTimeout := setRequestCancel(req, rt, deadline)

	resp, err = rt.RoundTrip(req)
	if err != nil {
		// ... code omitted ...
		return nil, didTimeout, err
	}
	// ... code omitted ...
	return resp, nil, nil
}
(*Transport).RoundTrip
The logic of (*Transport).RoundTrip is very simple: it just calls the (*Transport).roundTrip method, so this section is really an analysis of (*Transport).roundTrip.
func (t *Transport) RoundTrip(req *Request) (*Response, error) {
	return t.roundTrip(req)
}

func (t *Transport) roundTrip(req *Request) (*Response, error) {
	// ... header/header-value validation and other code omitted ...
	for {
		select {
		case <-ctx.Done():
			req.closeBody()
			return nil, ctx.Err()
		default:
		}

		// treq gets modified by roundTrip, so we need to recreate for each retry.
		treq := &transportRequest{Request: req, trace: trace}
		cm, err := t.connectMethodForRequest(treq)
		// ... code omitted ...

		pconn, err := t.getConn(treq, cm)
		if err != nil {
			t.setReqCanceler(req, nil)
			req.closeBody()
			return nil, err
		}

		var resp *Response
		if pconn.alt != nil {
			// HTTP/2 path.
			t.setReqCanceler(req, nil) // not cancelable with CancelRequest
			resp, err = pconn.alt.RoundTrip(req)
		} else {
			resp, err = pconn.roundTrip(treq)
		}
		if err == nil {
			return resp, nil
		}

		// ... retry-decision code omitted ...
	}
}
As shown above, each iteration of the for loop first checks whether the request context has already been cancelled; if not, the rest of the flow continues.
It first calls the t.getConn method to obtain a persistConn.
Since this article focuses on HTTP/1.1, we look directly at the HTTP/1.1 branch. Judging by the comments in the source and actual debugging, once a connection has been obtained, pconn.roundTrip is called next.
(*Transport).getConn
In my view this is one of the most critical steps in an HTTP request, because only after a connection to the server has been established can the subsequent communication take place.
func (t *Transport) getConn(treq *transportRequest, cm connectMethod) (pc *persistConn, err error) {
	req := treq.Request
	trace := treq.trace
	ctx := req.Context()
	// ... code omitted ...

	w := &wantConn{
		cm:         cm,
		key:        cm.key(),
		ctx:        ctx,
		ready:      make(chan struct{}, 1),
		beforeDial: testHookPrePendingDial,
		afterDial:  testHookPostPendingDial,
	}
	// ... code omitted ...

	// Queue for idle connection.
	if delivered := t.queueForIdleConn(w); delivered {
		pc := w.pc
		// ... code omitted ...
		return pc, nil
	}

	cancelc := make(chan error, 1)
	t.setReqCanceler(req, func(err error) { cancelc <- err })

	// Queue for permission to dial.
	t.queueForDial(w)

	// Wait for completion or cancellation.
	select {
	case <-w.ready:
		// Trace success but only for HTTP/1.
		// HTTP/2 calls trace.GotConn itself.
		if w.pc != nil && w.pc.alt == nil && trace != nil && trace.GotConn != nil {
			trace.GotConn(httptrace.GotConnInfo{Conn: w.pc.conn, Reused: w.pc.isReused()})
		}
		// ... code omitted ...
		return w.pc, w.err
	case <-req.Cancel:
		return nil, errRequestCanceledConn
	case <-req.Context().Done():
		return nil, req.Context().Err()
	case err := <-cancelc:
		if err == errRequestCanceled {
			err = errRequestCanceledConn
		}
		return nil, err
	}
}
From the above, obtaining a connection clearly breaks down into the following steps:
Call t.queueForIdleConn to look for an idle, reusable connection; if one is found, return it directly.
If no idle connection is available, call t.queueForDial to start dialing a new one.
Wait for w.ready to be closed, after which the new connection can be returned.
(*Transport).queueForIdleConn
The (*Transport).queueForIdleConn method looks up a []*persistConn slice in t.idleConn keyed by the request's connectMethodKey and, following its selection logic, picks a valid idle connection from that slice. If no idle connection is found, the wantConn is appended to the t.idleConnWait[w.key] wait queue; this wantConn is the w mentioned earlier.
The definition of connectMethodKey and the key parts of queueForIdleConn are as follows:
type connectMethodKey struct {
	proxy, scheme, addr string
	onlyH1              bool
}

func (t *Transport) queueForIdleConn(w *wantConn) (delivered bool) {
	// ... code omitted ...

	// Look for most recently-used idle connection.
	if list, ok := t.idleConn[w.key]; ok {
		stop := false
		delivered := false
		for len(list) > 0 && !stop {
			pconn := list[len(list)-1]

			// See whether this connection has been idle too long, considering
			// only the wall time (the Round(0)), in case this is a laptop or VM
			// coming out of suspend with previously cached idle connections.
			tooOld := !oldTime.IsZero() && pconn.idleAt.Round(0).Before(oldTime)
			// ... code omitted ...

			delivered = w.tryDeliver(pconn, nil)
			if delivered {
				// ... code omitted ...
			}
			stop = true
		}
		if len(list) > 0 {
			t.idleConn[w.key] = list
		} else {
			delete(t.idleConn, w.key)
		}
		if stop {
			return delivered
		}
	}

	// Register to receive next connection that becomes idle.
	if t.idleConnWait == nil {
		t.idleConnWait = make(map[connectMethodKey]wantConnQueue)
	}
	q := t.idleConnWait[w.key]
	q.cleanFront()
	q.pushBack(w)
	t.idleConnWait[w.key] = q
	return false
}
The main job of the w.tryDeliver method is to assign the connection to w.pc in a goroutine-safe way and close the w.ready channel. This is exactly the successful-return path we saw after the queueForIdleConn call in (*Transport).getConn.
(*Transport).queueForDial
The (*Transport).queueForDial method has three steps:
If t.MaxConnsPerHost is less than or equal to 0, run go t.dialConnFor(w) and return. MaxConnsPerHost is the maximum number of connections per host; a value of 0 or less means no limit.
If the current host's connection count is below t.MaxConnsPerHost, increment the count for that host, run go t.dialConnFor(w), and return.
If the current host's connection count has reached t.MaxConnsPerHost, append the wantConn to the t.connsPerHostWait[w.key] wait queue (this wantConn is again the w mentioned earlier). Before enqueuing, entries that are no longer waiting or have become invalid are cleaned out of the queue.
func (t *Transport) queueForDial(w *wantConn) {
	w.beforeDial()
	if t.MaxConnsPerHost <= 0 {
		go t.dialConnFor(w)
		return
	}

	t.connsPerHostMu.Lock()
	defer t.connsPerHostMu.Unlock()

	if n := t.connsPerHost[w.key]; n < t.MaxConnsPerHost {
		if t.connsPerHost == nil {
			t.connsPerHost = make(map[connectMethodKey]int)
		}
		t.connsPerHost[w.key] = n + 1
		go t.dialConnFor(w)
		return
	}

	if t.connsPerHostWait == nil {
		t.connsPerHostWait = make(map[connectMethodKey]wantConnQueue)
	}
	q := t.connsPerHostWait[w.key]
	q.cleanFront()
	q.pushBack(w)
	t.connsPerHostWait[w.key] = q
}
(*Transport).dialConnFor
The (*Transport).dialConnFor method calls t.dialConn to obtain a real *persistConn and then hands that connection to w. If w has already received a connection, the delivery fails, and t.putOrCloseIdleConn is called to put the connection back into the idle pool.
If dialing fails, t.decConnsPerHost is called to decrement the current host's connection count.
func (t *Transport) dialConnFor(w *wantConn) {
	defer w.afterDial()

	pc, err := t.dialConn(w.ctx, w.cm)
	delivered := w.tryDeliver(pc, err)
	if err == nil && (!delivered || pc.alt != nil) {
		// pconn was not passed to w,
		// or it is HTTP/2 and can be shared.
		// Add to the idle connection pool.
		t.putOrCloseIdleConn(pc)
	}
	if err != nil {
		t.decConnsPerHost(w.key)
	}
}
(*Transport).putOrCloseIdleConn
func (t *Transport) putOrCloseIdleConn(pconn *persistConn) {
	if err := t.tryPutIdleConn(pconn); err != nil {
		pconn.close(err)
	}
}

func (t *Transport) tryPutIdleConn(pconn *persistConn) error {
	if t.DisableKeepAlives || t.MaxIdleConnsPerHost < 0 {
		return errKeepAlivesDisabled
	}
	// ... code omitted ...
	t.idleMu.Lock()
	defer t.idleMu.Unlock()
	// ... code omitted ...

	// Deliver pconn to goroutine waiting for idle connection, if any.
	// (They may be actively dialing, but this conn is ready first.
	// Chrome calls this socket late binding.
	// See https://insouciant.org/tech/connection-management-in-chromium/.)
	key := pconn.cacheKey
	if q, ok := t.idleConnWait[key]; ok {
		done := false
		if pconn.alt == nil {
			// HTTP/1.
			// Loop over the waiting list until we find a w that isn't done already, and hand it pconn.
			for q.len() > 0 {
				w := q.popFront()
				if w.tryDeliver(pconn, nil) {
					done = true
					break
				}
			}
		} else {
			// HTTP/2.
			// Can hand the same pconn to everyone in the waiting list,
			// and we still won't be done: we want to put it in the idle
			// list unconditionally, for any future clients too.
			for q.len() > 0 {
				w := q.popFront()
				w.tryDeliver(pconn, nil)
			}
		}
		if q.len() == 0 {
			delete(t.idleConnWait, key)
		} else {
			t.idleConnWait[key] = q
		}
		if done {
			return nil
		}
	}

	if t.closeIdle {
		return errCloseIdle
	}
	if t.idleConn == nil {
		t.idleConn = make(map[connectMethodKey][]*persistConn)
	}
	idles := t.idleConn[key]
	if len(idles) >= t.maxIdleConnsPerHost() {
		return errTooManyIdleHost
	}
	// ... code omitted ...
	t.idleConn[key] = append(idles, pconn)
	t.idleLRU.add(pconn)
	// ... code omitted ...

	// Set idle timer, but only for HTTP/1 (pconn.alt == nil).
	// The HTTP/2 implementation manages the idle timer itself
	// (see idleConnTimeout in h2_bundle.go).
	if t.IdleConnTimeout > 0 && pconn.alt == nil {
		if pconn.idleTimer != nil {
			pconn.idleTimer.Reset(t.IdleConnTimeout)
		} else {
			pconn.idleTimer = time.AfterFunc(t.IdleConnTimeout, pconn.closeConnIfStillIdle)
		}
	}
	pconn.idleAt = time.Now()
	return nil
}

func (t *Transport) maxIdleConnsPerHost() int {
	if v := t.MaxIdleConnsPerHost; v != 0 {
		return v
	}
	return DefaultMaxIdleConnsPerHost // 2
}
As shown above, before a connection is placed into t.idleConn, the t.idleConnWait queue is checked first. If some request is waiting for an idle connection, the connection is handed over and reused; only when nothing is waiting is it added to t.idleConn. After the connection is added to t.idleConn, its idle timer is also reset.
Two further points in t.putOrCloseIdleConn deserve attention:
If the user's custom Transport sets DisableKeepAlives to true, or sets MaxIdleConnsPerHost to a negative number, the connection is never placed into t.idleConn, i.e. it cannot be reused.
When checking the number of existing idle connections, if MaxIdleConnsPerHost is non-zero the user-configured value is used, otherwise the default of 2 applies; see the (*Transport).maxIdleConnsPerHost function above.
In summary, for services that need to cap the number of connections, we can give http.Client a custom Transport and set its MaxConnsPerHost, MaxIdleConnsPerHost, IdleConnTimeout and DisableKeepAlives fields, limiting the connection count while still keeping a reasonable level of concurrency, as in the sketch below.
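A hedged sketch of such a configuration; the concrete numbers are placeholders that should be tuned for the actual workload:

package main

import (
	"net/http"
	"time"
)

func main() {
	// Cap connections per host while keeping a small idle pool for reuse.
	transport := &http.Transport{
		MaxConnsPerHost:     20,               // hard cap on connections to one host (0 means unlimited)
		MaxIdleConnsPerHost: 10,               // idle connections kept per host (default is 2)
		IdleConnTimeout:     90 * time.Second, // close idle connections after this long
		DisableKeepAlives:   false,            // keep-alives must stay on for reuse
	}
	client := &http.Client{
		Transport: transport,
		Timeout:   10 * time.Second,
	}
	_ = client
}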
(*Transport).decConnsPerHost
func (t *Transport) decConnsPerHost(key connectMethodKey) {
	// ... code omitted ...
	t.connsPerHostMu.Lock()
	defer t.connsPerHostMu.Unlock()
	n := t.connsPerHost[key]
	// ... code omitted ...

	// Can we hand this count to a goroutine still waiting to dial?
	// (Some goroutines on the wait list may have timed out or
	// gotten a connection another way. If they're all gone,
	// we don't want to kick off any spurious dial operations.)
	if q := t.connsPerHostWait[key]; q.len() > 0 {
		done := false
		for q.len() > 0 {
			w := q.popFront()
			if w.waiting() {
				go t.dialConnFor(w)
				done = true
				break
			}
		}
		if q.len() == 0 {
			delete(t.connsPerHostWait, key)
		} else {
			// q is a value (like a slice), so we have to store
			// the updated q back into the map.
			t.connsPerHostWait[key] = q
		}
		if done {
			return
		}
	}

	// Otherwise, decrement the recorded count.
	if n--; n == 0 {
		delete(t.connsPerHost, key)
	} else {
		t.connsPerHost[key] = n
	}
}
As shown above, decConnsPerHost does two things:
If any request is still waiting to dial, run go t.dialConnFor(w) for it.
Otherwise, decrement the current host's connection count.
(*Transport).dialConn
Based on http.Client's default configuration and actual debugging, the main logic of the (*Transport).dialConn method is:
Call t.dial(ctx, "tcp", cm.addr()) to create a TCP connection.
For HTTPS requests, establish a secure TLS channel on top of it.
Create read and write buffers for the persistConn. If the user does not customise their sizes, the writeBufferSize and readBufferSize methods show that both default to 4096 bytes (a configuration sketch follows the source below).
Run go pconn.readLoop() and go pconn.writeLoop() to start the read and write loops, then return the connection.
func (t *Transport) dialConn(ctx context.Context, cm connectMethod) (pconn *persistConn, err error) {
	pconn = &persistConn{
		t:             t,
		cacheKey:      cm.key(),
		reqch:         make(chan requestAndChan, 1),
		writech:       make(chan writeRequest, 1),
		closech:       make(chan struct{}),
		writeErrCh:    make(chan error, 1),
		writeLoopDone: make(chan struct{}),
	}
	// ... code omitted ...

	if cm.scheme() == "https" && t.hasCustomTLSDialer() {
		// ... code omitted ...
	} else {
		conn, err := t.dial(ctx, "tcp", cm.addr())
		if err != nil {
			return nil, wrapErr(err)
		}
		pconn.conn = conn
		if cm.scheme() == "https" {
			var firstTLSHost string
			if firstTLSHost, _, err = net.SplitHostPort(cm.addr()); err != nil {
				return nil, wrapErr(err)
			}
			if err = pconn.addTLS(firstTLSHost, trace); err != nil {
				return nil, wrapErr(err)
			}
		}
	}

	// Proxy setup.
	switch {
	// ... code omitted ...
	}

	if cm.proxyURL != nil && cm.targetScheme == "https" {
		// ... code omitted ...
	}
	if s := pconn.tlsState; s != nil && s.NegotiatedProtocolIsMutual && s.NegotiatedProtocol != "" {
		// ... code omitted ...
	}

	pconn.br = bufio.NewReaderSize(pconn, t.readBufferSize())
	pconn.bw = bufio.NewWriterSize(persistConnWriter{pconn}, t.writeBufferSize())

	go pconn.readLoop()
	go pconn.writeLoop()
	return pconn, nil
}

func (t *Transport) writeBufferSize() int {
	if t.WriteBufferSize > 0 {
		return t.WriteBufferSize
	}
	return 4 << 10
}

func (t *Transport) readBufferSize() int {
	if t.ReadBufferSize > 0 {
		return t.ReadBufferSize
	}
	return 4 << 10
}
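If the 4 KB defaults are too small for a given workload, the same custom Transport can override them via the exported WriteBufferSize and ReadBufferSize fields; the sizes below are placeholders:

package main

import "net/http"

func main() {
	// WriteBufferSize and ReadBufferSize override the 4 << 10 (4 KB) defaults
	// returned by writeBufferSize()/readBufferSize() above.
	transport := &http.Transport{
		WriteBufferSize: 64 << 10, // 64 KB write buffer per connection
		ReadBufferSize:  64 << 10, // 64 KB read buffer per connection
	}
	client := &http.Client{Transport: transport}
	_ = client
}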
(*persistConn).roundTrip
The (*persistConn).roundTrip method is one of the cores of an HTTP/1.1 request: this is where the real Response is obtained and handed back to the upper layers.
func (pc *persistConn) roundTrip(req *transportRequest) (resp *Response, err error) {
	// ... code omitted ...
	gone := make(chan struct{})
	defer close(gone)
	// ... code omitted ...

	const debugRoundTrip = false

	// Write the request concurrently with waiting for a response,
	// in case the server decides to reply before reading our full
	// request body.
	startBytesWritten := pc.nwrite
	writeErrCh := make(chan error, 1)
	pc.writech <- writeRequest{req, writeErrCh, continueCh}

	resc := make(chan responseAndError)
	pc.reqch <- requestAndChan{
		req:        req.Request,
		ch:         resc,
		addedGzip:  requestedGzip,
		continueCh: continueCh,
		callerGone: gone,
	}

	var respHeaderTimer <-chan time.Time
	cancelChan := req.Request.Cancel
	ctxDoneChan := req.Context().Done()
	for {
		testHookWaitResLoop()
		select {
		case err := <-writeErrCh:
			// ... code omitted ...
			if err != nil {
				pc.close(fmt.Errorf("write error: %v", err))
				return nil, pc.mapRoundTripError(req, startBytesWritten, err)
			}
			// ... code omitted ...
		case <-pc.closech:
			// ... code omitted ...
			return nil, pc.mapRoundTripError(req, startBytesWritten, pc.closed)
		case <-respHeaderTimer:
			// ... code omitted ...
			return nil, errTimeout
		case re := <-resc:
			if (re.res == nil) == (re.err == nil) {
				panic(fmt.Sprintf("internal error: exactly one of res or err should be set; nil=%v", re.res == nil))
			}
			if debugRoundTrip {
				req.logf("resc recv: %p, %T/%#v", re.res, re.err, re.err)
			}
			if re.err != nil {
				return nil, pc.mapRoundTripError(req, startBytesWritten, re.err)
			}
			return re.res, nil
		case <-cancelChan:
			pc.t.CancelRequest(req.Request)
			cancelChan = nil
		case <-ctxDoneChan:
			pc.t.cancelRequest(req.Request, req.Context().Err())
			cancelChan = nil
			ctxDoneChan = nil
		}
	}
}
As shown above, (*persistConn).roundTrip can be broken into three steps:
Write a writeRequest into the connection's writech: pc.writech <- writeRequest{req, writeErrCh, continueCh}. As seen in (*Transport).dialConn, pc.writech is a channel with a buffer size of 1, so the write succeeds immediately.
Write a requestAndChan into the connection's reqch: pc.reqch <- requestAndChan{...}. Like pc.writech, pc.reqch is buffered with size 1. requestAndChan.ch is an unbuffered responseAndError channel, and it is over this channel that (*persistConn).roundTrip reads the real response.
Start a for-select loop and wait for the response, a timeout, or a cancellation signal (a toy sketch of this channel pattern follows below).
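To illustrate that channel pattern outside of net/http, here is a toy sketch (not stdlib code): the buffered channel makes the hand-off non-blocking, while the unbuffered channel blocks the caller until a response is delivered:

package main

import "fmt"

// A toy illustration of the channel pattern used by (*persistConn).roundTrip:
// a buffered channel of size 1 lets the request be handed off without blocking,
// while the unbuffered result channel blocks until the other goroutine replies.
func main() {
	writech := make(chan string, 1) // like pc.writech: buffer 1, the send below never blocks
	resc := make(chan string)       // like requestAndChan.ch: unbuffered

	writech <- "request bytes" // returns immediately thanks to the buffer

	go func() {
		req := <-writech
		resc <- "response for " + req // blocks until the caller receives
	}()

	fmt.Println(<-resc)
}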
(*persistConn).writeLoop (the write loop)
The body of (*persistConn).writeLoop is fairly simple: it writes the user's request into the connection's write buffer and then flushes it.
func (pc *persistConn) writeLoop() {
	defer close(pc.writeLoopDone)
	for {
		select {
		case wr := <-pc.writech:
			startBytesWritten := pc.nwrite
			err := wr.req.Request.write(pc.bw, pc.isProxy, wr.req.extra, pc.waitForContinue(wr.continueCh))
			if bre, ok := err.(requestBodyReadError); ok {
				err = bre.error
				wr.req.setError(err)
			}
			if err == nil {
				err = pc.bw.Flush()
			}
			if err != nil {
				wr.req.Request.closeBody()
				if pc.nwrite == startBytesWritten {
					err = nothingWrittenError{err}
				}
			}
			pc.writeErrCh <- err // to the body reader, which might recycle us
			wr.ch <- err         // to the roundTrip function
			if err != nil {
				pc.close(err)
				return
			}
		case <-pc.closech:
			return
		}
	}
}
(*persistConn).readLoop (the read loop)
(*persistConn).readLoop contains quite a few details; let's look at the code first and then work through it step by step.
func (pc *persistConn) readLoop() {
	closeErr := errReadLoopExiting // default value, if not changed below
	defer func() {
		pc.close(closeErr)
		pc.t.removeIdleConn(pc)
	}()

	tryPutIdleConn := func(trace *httptrace.ClientTrace) bool {
		if err := pc.t.tryPutIdleConn(pc); err != nil {
			// ... code omitted ...
		}
		// ... code omitted ...
		return true
	}

	// ... code omitted ...

	alive := true
	for alive {
		// ... code omitted ...
		rc := <-pc.reqch
		trace := httptrace.ContextClientTrace(rc.req.Context())

		var resp *Response
		if err == nil {
			resp, err = pc.readResponse(rc, trace)
		} else {
			err = transportReadFromServerError{err}
			closeErr = err
		}
		// ... code omitted ...

		bodyWritable := resp.bodyIsWritable()
		hasBody := rc.req.Method != "HEAD" && resp.ContentLength != 0

		if resp.Close || rc.req.Close || resp.StatusCode <= 199 || bodyWritable {
			// Don't do keep-alive on error if either party requested a close
			// or we get an unexpected informational (1xx) response.
			// StatusCode 100 is already handled above.
			alive = false
		}

		if !hasBody || bodyWritable {
			// ... code omitted ...
			continue
		}

		waitForBodyRead := make(chan bool, 2)
		body := &bodyEOFSignal{
			body: resp.Body,
			earlyCloseFn: func() error {
				waitForBodyRead <- false
				<-eofc // will be closed by deferred call at the end of the function
				return nil
			},
			fn: func(err error) error {
				isEOF := err == io.EOF
				waitForBodyRead <- isEOF
				if isEOF {
					<-eofc // see comment above eofc declaration
				} else if err != nil {
					if cerr := pc.canceled(); cerr != nil {
						return cerr
					}
				}
				return err
			},
		}

		resp.Body = body
		// ... code omitted ...

		select {
		case rc.ch <- responseAndError{res: resp}:
		case <-rc.callerGone:
			return
		}

		// Before looping back to the top of this function and peeking on
		// the bufio.Reader, wait for the caller goroutine to finish
		// reading the response body. (or for cancellation or death)
		select {
		case bodyEOF := <-waitForBodyRead:
			pc.t.setReqCanceler(rc.req, nil) // before pc might return to idle pool
			alive = alive &&
				bodyEOF &&
				!pc.sawEOF &&
				pc.wroteRequest() &&
				tryPutIdleConn(trace)
			if bodyEOF {
				eofc <- struct{}{}
			}
		case <-rc.req.Cancel:
			alive = false
			pc.t.CancelRequest(rc.req)
		case <-rc.req.Context().Done():
			alive = false
			pc.t.cancelRequest(rc.req, rc.req.Context().Err())
		case <-pc.closech:
			alive = false
		}

		testHookReadLoopBeforeNextRead()
	}
}
As shown above, as long as the connection is alive this read loop keeps running; it only exits when the connection becomes inactive or some other error occurs.
In the source above, pc.readResponse(rc, trace) reads the response for one request out of the connection's read buffer.
Once a response has been read, the loop checks whether the request was a HEAD request or the response body is empty; if so, the response is written to rc.ch and the connection is returned to idleConn (the source for that branch is omitted here for brevity, but the normal path performs the same two steps: deliver the response and put the connection into idleConn).
If the request is not a HEAD request and the response body is not empty, i.e. !hasBody || bodyWritable evaluates to false:
Create waitForBodyRead, a channel with a buffer size of 2 that waits for the response body to be read: waitForBodyRead := make(chan bool, 2).
Replace the response body with a bodyEOFSignal struct. As the source above shows, resp.Body now carries two functions, earlyCloseFn and fn. earlyCloseFn writes false into waitForBodyRead, while fn checks whether the body has been fully read and writes true into waitForBodyRead if so, false otherwise.
Write the wrapped response to rc.ch. rc comes from rc := <-pc.reqch, and pc.reqch is exactly the requestAndChan written earlier by (*persistConn).roundTrip; requestAndChan.ch is an unbuffered responseAndError channel through which (*persistConn).roundTrip receives the real response.
Select on the value written into waitForBodyRead. If true is received, tryPutIdleConn can be called (it in turn calls the (*Transport).tryPutIdleConn method described earlier) to put the connection into idleConn so it can be reused.
We now know why true is written into waitForBodyRead, but not yet when it is written.
func (es *bodyEOFSignal) Read(p []byte) (n int, err error) {
	// ... code omitted ...
	n, err = es.body.Read(p)
	if err != nil {
		es.mu.Lock()
		defer es.mu.Unlock()
		if es.rerr == nil {
			es.rerr = err
		}
		err = es.condfn(err)
	}
	return
}

func (es *bodyEOFSignal) Close() error {
	es.mu.Lock()
	defer es.mu.Unlock()
	if es.closed {
		return nil
	}
	es.closed = true
	if es.earlyCloseFn != nil && es.rerr != io.EOF {
		return es.earlyCloseFn()
	}
	err := es.body.Close()
	return es.condfn(err)
}

// caller must hold es.mu.
func (es *bodyEOFSignal) condfn(err error) error {
	if es.fn == nil {
		return err
	}
	err = es.fn(err)
	es.fn = nil
	return err
}
From this source we can see that the connection can only be reused after the caller has read the response body in full. So in HTTP/1.1, requests on a single connection must be handled one after another: the next request can only proceed once the previous one has finished. If an earlier request is slow, the ones behind it must wait; this is HTTP/1.1's head-of-line blocking.
Given this, whenever we Gophers issue a request whose response we do not care about, we must still remember to read the response body to completion so the connection stays reusable. Here is a small demo:
io.CopyN(ioutil.Discard, resp.Body, 2<<10)
resp.Body.Close()
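For context, a hedged sketch of the same drain-and-close pattern inside a complete request; the function name and URL are invented for illustration, and io.Copy with ioutil.Discard works just as well as the io.CopyN form above:

package main

import (
	"io"
	"io/ioutil"
	"net/http"
)

func fireAndForget(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	// Drain whatever is left in the body, then close it, so the underlying
	// connection can go back into the Transport's idle pool and be reused.
	defer resp.Body.Close()
	_, err = io.Copy(ioutil.Discard, resp.Body)
	return err
}

func main() {
	_ = fireAndForget("http://example.com")
}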
That is the HTTP/1.1 request flow as I have pieced it together.
Note, in the interest of rigor, a reminder:
Many details in the flow above were mentioned only briefly or skipped entirely; please treat this article as a guide and consult the source where needed.
Summary
When issuing HTTP/1.1 requests in Go, if you do not care about a response, make sure you still read the response body in full so the connection can be reused.
For services that must limit the number of connections, you can give the http.Client a custom Transport and tune its MaxConnsPerHost, MaxIdleConnsPerHost, IdleConnTimeout and DisableKeepAlives fields to control the connection count.
If you need custom redirect handling, set the http.Client's CheckRedirect field.
In HTTP/1.1, requests on a single connection are processed strictly in order: the next request can only start once the previous one has completed. If an earlier request is slow, later ones must wait; this is HTTP/1.1's head-of-line blocking.
This concludes the detailed walkthrough of HTTP requests in Go: an analysis of the HTTP/1.1 request flow.