# Notes from MOQ Charter review - 2022/04/22

## [Issue 26](https://github.com/moq-wg/moq-charter/issues/26) - QUIC streams, datagrams, or both

* Considerable discussion, but we seem to have converged on not specifying whether this is QUIC streams, datagrams, or both in the charter
* We should be doing some engineering before locking in design choices

Christian: Issue with streams with relays; head-of-line blocking still happens at every node, and the end-to-end jitter is much higher in a stream design than in a datagram design. Datagrams give much less jitter.

Vidhi Goel: Streams should not be used for out-of-order sending - better to just use datagrams. Would be good to better explain the use cases covered by this.

James: Leave it to the WG to figure out later.

Jonathan: Need to do experiments with streams vs. many streams vs. datagrams.

Victor: Streams would work as well as DASH does today.

Christian: Big difference between defining a new charter

Cullen: View it as likely we will eventually support both datagrams and streams for various uses, and we should do both from the start.

Jonathan: Would like to see some running code.

Conclusion: Not going with one vs. the other; need to support both, driven by the application on top.

## Spencer's metaquestion - does MOQ include both interactive and live media?

Apparently people on this call were remembering that the agreement in the room at the MOQ BOF was not what Spencer was remembering (and we didn't consult [the minutes of the MOQ BOF](https://datatracker.ietf.org/meeting/113/materials/minutes-113-moq-00)).

So, both interactive and live, it is, Spencer supposes, at least until he consults the minutes of the MOQ BOF.

## [Issue 10](https://github.com/moq-wg/moq-charter/issues/10) - make clear support at least direct on top of QUIC and on top of webtransport

We should specify the mapping(s) we expect to support in the charter.

Victor: Some use cases need WebTransport in browsers. Want WebTransport to be in.

Jonathan: Want to specify that it is usable from JavaScript in the browser.

James: Worth specifying in WebTransport.

Cullen: We should be crisp on whether "running in a browser" means "in a modified browser, coordinated with W3C" or "in an unmodified browser".

Ted: Browsers will change as this work goes on.

Victor: WebTransport is largely done, but there is another year of work at W3C for things like priority and congestion control. Not limited, but we do have a good sense of direction from what is done or in progress today.

Jonathan: A JS API is not likely to expose low-level details from the QUIC stack, like whether a frame was ACK'd. Need to constrain our solution to things that the API can reasonably do.

Suhas: We could ask for the API to expose some sort of information about the status and statistics of the QUIC stack.

Victor: There is an open issue. Lack of progress because no web developers are asking for this. Possible to do something like an enum that asks for real time, but we don't yet have specific requirements.

Mo: Charter text should say it works over the browser API. Another item that says the WG may liaise with other groups to request changes. But the core solution should work with modern browsers. MOQ will be just another thing that causes optimizations to be made to lower layers and APIs.

Summary: The WG should not block on work to be done in browsers. Work with the modern APIs they provide as this work moves forward. We may go and ask for optimizations. The work will support both browser and non-browser applications.
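To make the browser-API constraints above concrete, here is a minimal TypeScript sketch of sending one encoded frame from a browser over the W3C WebTransport API as it exists today, on either a unidirectional stream or a datagram. The URL and payload are placeholders and no MOQ framing is implied; this is only the shape of the API the JavaScript discussion refers to.

```typescript
// Minimal sketch only: the endpoint URL and frame bytes are placeholders,
// and no MOQ wire format is implied. Requires a browser with WebTransport.
async function sendFrame(url: string, frame: Uint8Array, asDatagram: boolean) {
  const wt = new WebTransport(url); // e.g. "https://relay.example/moq" (hypothetical)
  await wt.ready;

  if (asDatagram) {
    // Datagram path: unreliable and unordered; the application decides whether
    // to resend. Real code would also respect wt.datagrams.maxDatagramSize.
    const writer = wt.datagrams.writable.getWriter();
    await writer.write(frame);
    writer.releaseLock();
  } else {
    // Stream path: one unidirectional stream per frame, so fragmentation,
    // loss recovery, and reassembly come from QUIC itself.
    const stream = await wt.createUnidirectionalStream();
    const writer = stream.getWriter();
    await writer.write(frame);
    await writer.close();
  }
}
```

Note that, as Jonathan points out, this API does not expose whether an individual datagram was ACK'd, which is exactly the kind of constraint the solution would have to live with.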
## [Issue 11](https://github.com/moq-wg/moq-charter/issues/11) - Media Container Agility

Cullen: Looking for flexibility for very low bitrate audio.

Victor: Allow any container and specify at least one as mandatory to implement.

Mo: Payload format is a better way than a container. Do we expect interoperability between peers? The fundamental question is whether the sender and receiver are always under the control of the same developer. View is that the media format should be standardized, and it should be possible for one vendor to write a client that interacts with another vendor's server.

Victor: For live ingestion, want at least one standardized container that is close to what is used today in broadcast applications. Things like OBS can work with many services. Need to pick one container to have interoperability across a wide range of players.

Conclusion: In favour of agility, but we need a way to negotiate the container in use, plus a mandatory-to-implement container to ensure interoperability.
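As a rough illustration of what "negotiate the container, with a mandatory-to-implement fallback" could look like, a short TypeScript sketch follows; the container names, the message shape, and the choice of CMAF as the mandatory container are purely hypothetical, not anything the group has agreed.

```typescript
// Illustration only: container identifiers, message shape, and the choice of
// "cmaf" as mandatory-to-implement are hypothetical.
type Container = "cmaf" | "mpeg-ts" | "raw-loc" | string;

interface ContainerOffer {
  supported: Container[]; // containers the sender can produce, in preference order
}

// Every endpoint implements this one, so negotiation can never fail outright.
const MANDATORY_TO_IMPLEMENT: Container = "cmaf";

function chooseContainer(offer: ContainerOffer, receiverSupports: Container[]): Container {
  // Pick the first container both sides support, in the sender's preference order.
  for (const c of offer.supported) {
    if (receiverSupports.includes(c)) {
      return c;
    }
  }
  // Otherwise fall back to the mandatory-to-implement container.
  return MANDATORY_TO_IMPLEMENT;
}

// Example: chooseContainer({ supported: ["raw-loc", "cmaf"] }, ["cmaf", "mpeg-ts"]) === "cmaf"
```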
## Follow-ups

Spencer: Spencer set up a weekly email to the list summarizing GitHub activity. If something is a large meta issue, better to take it to the list. Major decisions go to the email list.

* review issues on GitHub
* conversation to continue on the mailing list and GitHub

## Webex chat logs

* from Alan Frindell to everyone: 9:12 AM Does webex order the hands or should we use +q -q in the chat?
* from Jörg Ott to everyone: 9:13 AM Main point: don't prefer one over the other in the charter.
* from Alan Frindell to everyone: 9:14 AM I hear what Christian is saying, but a relay can be configured to forward stream data out of order as well
* from Spencer Dawkins to everyone: 9:15 AM By the way, is anyone taking notes? ;-)
* from Suhas Nandakumar to everyone: 9:15 AM Cullen volunteered, but Spencer can you help too?
* from Alan Frindell to everyone: 9:15 AM Ok I hear that too
* from Spencer Dawkins to everyone: 9:15 AM Where are we taking notes?
* from Suhas Nandakumar to everyone: 9:16 AM personal notebook for now :-) we didn't plan on CodiMD or HedgeDoc
* from Alan Frindell to everyone: 9:16 AM Streams offer quite a bit of functionality out of the box
* from Jörg Ott to everyone: 9:17 AM quite a bit of which you may not necessarily need
* from Luke to everyone: 9:17 AM You can have a stream per frame; the idea is to reuse QUIC fragmentation and reassembly functionality
* from Alan Frindell to everyone: 9:17 AM also loss detection and retransmission
* from james h to everyone: 9:18 AM +1
* from Alan Frindell to everyone: 9:18 AM +1 James to not bake it into the charter
* from Jörg Ott to everyone: 9:18 AM +1
* from Hang Shi to everyone: 9:18 AM +1 to James
* from Max Sharabayko to everyone: 9:18 AM +1
* from Spencer Dawkins to everyone: 9:19 AM +1. This is about two layers of detail too low for a charter.
* from Vidhi Goel to everyone: 9:20 AM Do current implementations of distribution protocols use UDP?
* from Ted Hardie to everyone: 9:20 AM I am not sure "inevitable" is the word I'd use here.
* from Vidhi Goel to everyone: 9:24 AM I am fine with it.
* from Spencer Dawkins to everyone: 9:24 AM To all - I'm taking notes in https://notes.ietf.org/QKPghI6JQNegkNIf5ZA7rw, but I'll be capturing agreements (or lack thereof), not point-by-point discussions
* from Jörg Ott to everyone: 9:25 AM I don't think this outcome is inevitable
* from Christian Huitema to everyone: 9:25 AM For example, the QUIC stack already knows which datagrams were received and which not -- datagram frames elicit ACK. It is straightforward to pass the information from QUIC to the application, and let the application decide whether to resend the data or not based on media state.
* from Jörg Ott to everyone: 9:26 AM Right, this is what our current RTP mapping exploits
* from Jonathan Lennox to everyone: 9:26 AM Well, insofar as designing good transport APIs is "straightforward", which it isn't. Which is also related to the question of whether we want this to be able to work over WebTransport.
* from Luke to everyone: 9:26 AM Datagrams are not inevitable, but I would say a stream per frame is required to drop non-reference frames
* from Luke to everyone: 9:27 AM and that may be required for real-time latency use cases; not sure
* from Alan Frindell to everyone: 9:27 AM RUSH uses stream per frame and is deployed widely
* from Victor Vasiliev to everyone: 9:28 AM RUSH, Warp and DASH/H3 work well, but they might not work for lower latency targets
* from Christian Huitema to everyone: 9:28 AM When the frame is many packets (e.g. I-Frames) then one can see head-of-line blocking on the stream itself, and then cumulative jitter accumulating in media relays.
* from Victor Vasiliev to everyone: 9:29 AM For what it's worth, I believe doing audio over datagrams (with no RTX) as an extension is worthwhile from the get-go
* from Jonathan Lennox to everyone: 9:29 AM But wouldn't the same thing happen if you were fragmenting an I-Frame over datagrams? If you don't have the whole I-Frame, you can't move forward in the decoder.
* from Cullen Jennings to everyone: 9:29 AM @Jonathan - fair - I guess we probably do have a fair amount of running code today that we could look at
* from Christian Huitema to everyone: 9:30 AM If you are fragmenting I-Frames over datagrams, you can do reassembly end-to-end, and thus avoid head-of-line blocking.
* from Lucas Pardue to everyone: 9:30 AM I don't think *QUIC* implementers care what the data on streams is
* from Vidhi Goel to everyone: 9:31 AM I agree with this conclusion
* from Spencer Dawkins to everyone: 9:31 AM https://notes.ietf.org/QKPghI6JQNegkNIf5ZA7rw?edit
* from Victor Vasiliev to everyone: 9:33 AM From my perspective, "supporting datagrams" is a concept too vague to be useful
* from Roni Even to everyone: 9:34 AM There is a difference between the required latency cases. In video conferencing over UDP the issue of loss is addressed in the RTP layer, on the sender side by FEC or on the receiver side by some sort of AI, so retransmission is not mandatory in order to reduce latency.
* from Roni Even to everyone: 9:35 AM for audio there is always the option of the listener to say "what did you say"
* from Christian Huitema to everyone: 9:39 AM Is there any desire to reference what MASQUE does?
* from Christian Huitema to everyone: 9:40 AM Like the HTTP3 "capsule" work?
* from Lucas Pardue to everyone: 9:41 AM only indirectly. I don't think this WG would have difficult dependencies on MASQUE or capsule
* from Spencer Dawkins to everyone: 9:55 AM When we come to the end of the call, can someone grab the chat log before it goes away? A lot of discussion here isn't showing up in the notes (and I'm not complaining about that), but is worth capturing as well.
* from james h to everyone: 9:56 AM definitely want interoperability, but maybe framing/container/whatever can be negotiated as part of some capabilities exchange
* from Victor Vasiliev to everyone: 9:57 AM The answer here is "we need interoperability"; because at least for ingestion side, we expect a lot of different implementations
* from james h to everyone: 9:57 AM +1 to victor
* from Jörg Ott to everyone: 9:58 AM +1
* from Spencer Dawkins to everyone: 9:59 AM To the convenors - how do we follow up on this discussion?
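For reference alongside the chat discussion of stream-per-frame versus datagrams, here is a minimal TypeScript sketch of the fragment-and-reassemble-end-to-end idea Christian describes: a large frame is split into datagram-sized chunks at the sender and rebuilt only at the receiver, so relays forward chunks independently instead of blocking on a partially received stream. The 8-byte header layout and all names are made up for illustration and are not a proposed MOQ format.

```typescript
// Illustration only: the 8-byte header [frameId(4) | index(2) | total(2)] and all
// names are hypothetical, not a proposed MOQ wire format.
function fragmentFrame(frameId: number, frame: Uint8Array, maxDatagram: number): Uint8Array[] {
  const payloadSize = maxDatagram - 8;
  const total = Math.ceil(frame.length / payloadSize);
  const chunks: Uint8Array[] = [];
  for (let i = 0; i < total; i++) {
    const payload = frame.subarray(i * payloadSize, (i + 1) * payloadSize);
    const chunk = new Uint8Array(8 + payload.length);
    const view = new DataView(chunk.buffer);
    view.setUint32(0, frameId);
    view.setUint16(4, i);
    view.setUint16(6, total);
    chunk.set(payload, 8);
    chunks.push(chunk);
  }
  return chunks;
}

// Receiver side: buffer chunks per frame and hand the frame to the decoder once
// every chunk has arrived. A real receiver would also time out incomplete frames
// (e.g. drop a non-reference frame rather than wait for a resend).
class Reassembler {
  private frames = new Map<number, { total: number; parts: Map<number, Uint8Array> }>();

  onDatagram(datagram: Uint8Array): Uint8Array | null {
    const view = new DataView(datagram.buffer, datagram.byteOffset, datagram.byteLength);
    const frameId = view.getUint32(0);
    const index = view.getUint16(4);
    const total = view.getUint16(6);

    const entry = this.frames.get(frameId) ?? { total, parts: new Map<number, Uint8Array>() };
    entry.parts.set(index, datagram.subarray(8));
    this.frames.set(frameId, entry);
    if (entry.parts.size < entry.total) return null;

    // All fragments are present: concatenate them in order and release the frame.
    let length = 0;
    entry.parts.forEach((p) => { length += p.length; });
    const frame = new Uint8Array(length);
    let offset = 0;
    for (let i = 0; i < entry.total; i++) {
      const part = entry.parts.get(i)!;
      frame.set(part, offset);
      offset += part.length;
    }
    this.frames.delete(frameId);
    return frame;
  }
}
```

This also mirrors Jonathan's caveat in the chat: if the whole I-frame is needed, the decoder still waits for every chunk; the gain is only that relays and other frames are not held up behind it.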