adding realtime to your projects
TRANSCRIPT
MADRID · NOV 27-28 · 2015
I write software at limenius.com.
We build web and mobile apps for clients.
3/4 of them need some form of real time.
We have tried many things in different battles.
This year we abstracted our solution into carotene-project.com and open-sourced it.
Under this hypothesis:
[Chart: Human Knowledge — produced before the request: 99%; produced right now: 1%?]
Probably more like 0.0000001%
Corollary: If something important happens after the HTTP request, it will be filtered and
solidified in Knowledge and preserved for ages! Come again next month.
Under the document paradigm:
Serve a Document.
Process a Request, call a program, serve a Response.
Short-lived processes.
CGI was born. And then…
Perl-cgi, PHP, Rails, Django, Symfony, WordPress, MediaWiki, Spring, CakePHP, CodeIgniter, Laravel
25 years have passed. Humanity has been exposed to huge loads of information.
Time to check the enlightenment hypothesis.
What do people express more interest in?
Possibility 1
(Has a chance of doing something meaningful and being remembered)
Not what it was built for: request overhead.
Granularity compromise: requests/min vs resources.
Chaining events: waiting times add up. IoT capable?
Like asking for a video one frame at a time.
Soft Real Time
Reasonable response time
Normally based on human perception
WebSockets, what are they? A hack (standardized, but a hack) to turn HTTP back into a full-duplex TCP connection and call it an "upgrade".

GET /chat HTTP/1.1
Host: server.example.com
Upgrade: WebSocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Origin: http://example.com
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: WebSocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPL<…>Oo=
But long polling… Make an Ajax request and leave it open, waiting for a response.
Introduces complexity: chained requests must be grouped into "connections".
Needs buffering handling.
Adds complexity to support the remaining 11.83%.
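The buffering problem can be sketched in a few lines. This hypothetical class (ours, purely illustrative) is the per-client bookkeeping that chained requests force on the server: messages arriving between polls must be buffered, and an open poll must be resolved as soon as something arrives.

```javascript
// Hypothetical sketch of per-client long-polling state on the server.
class LongPollConnection {
  constructor() {
    this.buffer = [];    // messages that arrived while no request was open
    this.pending = null; // resolver of the currently held (open) request
  }

  // A poll request arrives: answer immediately if messages are buffered,
  // otherwise hold the request open until a message shows up.
  poll() {
    if (this.buffer.length > 0) {
      return Promise.resolve(this.buffer.splice(0));
    }
    return new Promise((resolve) => { this.pending = resolve; });
  }

  // The server pushes a message: release the held request or buffer it.
  deliver(message) {
    if (this.pending) {
      const resolve = this.pending;
      this.pending = null;
      resolve([message]);
    } else {
      this.buffer.push(message);
    }
  }
}
```

A real implementation also needs timeouts (to recycle held requests before proxies kill them) and reconnection handling — exactly the complexity warned about above.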
Three-layer model

Layer      | Regular HTTP       | Real Time
Transport  | HTTP               | WebSockets (or long polling)
Messaging  | Request/Response   | Publish/Subscribe (or RPC)
Data Sync  | Probably your work | Probably your work
There is a fourth layer
Everything our tech model cannot handle, our brain has to make up for.
F5F5F5F5F5F5F5F5F5F5F5F5F5F5F5!!!!!!!
A human dealing with a protocol that doesn't map reality well
Three-layer model: Data Sync
Core of the product.
Where you add value.
Implemented in a language you know well.
Three-layer model: Transport & Messaging
Implemented in a very performant technology.
Don't mix your business logic with it.
Aim for

Layer      | Regular HTTP | Real Time
Transport  | Nginx/Apache | RT server
Messaging  | Nginx/Apache | RT server
Data Sync  | Your work    | Your work
DIY approach (typically socket.io). The tutorial looks super easy, but:
Soon you realise that you must deal with messaging details (who is in a channel? how to do auth?).
Multi-server environments are up to you (how do you know if a user is connected to a channel on any server?).
The tendency is to duplicate code between the socket.io realm and your business logic; fight against this.
Transport & messaging may end up conditioning the technology you use for your business logic.
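As an illustration of the first two points, here is the kind of bookkeeping a DIY setup makes you own (the class and its names are ours, not a socket.io API): tracking who is in which channel. A Map works on one node; across servers this state has to live somewhere shared (e.g. Redis), which tutorials rarely show.

```javascript
// Hypothetical single-node channel membership registry.
class ChannelRegistry {
  constructor() {
    this.channels = new Map(); // channel name -> Set of user ids
  }

  join(channel, userId) {
    if (!this.channels.has(channel)) {
      this.channels.set(channel, new Set());
    }
    this.channels.get(channel).add(userId);
  }

  leave(channel, userId) {
    const members = this.channels.get(channel);
    if (members) {
      members.delete(userId);
      if (members.size === 0) this.channels.delete(channel);
    }
  }

  // "Is this user connected to this channel?" — only answerable for THIS
  // server; answering it cluster-wide is the multi-server problem above.
  isConnected(channel, userId) {
    const members = this.channels.get(channel);
    return members !== undefined && members.has(userId);
  }
}
```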
Also, may I suggest?
If you are going down this path, check out Erlang/Elixir. (Btw, this is what we did.)
Options
PAAS. Data-centric (talk to a NoSQL DB).
Open source. Extensible with node.js or Ruby code.
PAAS. PubSub channels. RT CDN infrastructure. Lots of SDKs in different languages. Black-box oriented.
Open source. PubSub channels. Black-box oriented.
Open source. PubSub (& RPC). Platform oriented.
Black Box approach: an RT server treated like Nginx or Apache (no need to care whether Nginx is written in C or Haskell, right?).
Conflict zones:
Authentication and authorization: implement in the RT server vs in your logic.
Communication with the regular HTTP server in HTTP Request/Response fashion.
IMHO this makes sense. (You may have another opinion, and that is cool.)
Publish to a channel

Carotene:
Carotene.publish({channel: "mychannel", message: "Hello world!"});

Pusher:
channel.trigger("message", "Hello world!");

PubNub:
pubnub.publish({ channel: "mychannel", message: "Hello world!", callback: function(m){ console.log(m); } });

(Anything JSON-serializable)
Subscribe to a channel

Carotene:
Carotene.subscribe({channel: "mychannel", onMessage: function(message) { console.log(message); } });

Pusher:
channel.bind('my-event', function(data) { console.log(data.message); });

PubNub:
PUBNUB.subscribe({ channel: 'my-channel', message: function(m){ console.log(m); } });
Authentication: Who are you?

Carotene:
Carotene.authenticate({ userId: "some_user_identifier", token: "token_for_this_user" });

Your server generates this token. Check out JWT.

[{carotene, [
    % ... Other configuration options
    {authenticate_url, "http://mybackend.com/authenticate_carotene/"}
]}].

Your server receives a POST request.
Authorization: Can you do that?

[{carotene, [
    % ... Other configuration options
    {subscribe_authorization, [{level, anonymous}]}
]}].

Anonymous allowed.
Authorization: Can you do that?

[{carotene, [
    % ... Other configuration options
    {subscribe_authorization, [{level, authenticated}]}
]}].

Only authenticated allowed.
Authorization: Can you do that?

[{carotene, [
    % ... Other configuration options
    {subscribe_authorization, [
        {level, ask},
        {authorization_url, "http://mybackend.com/authorize-subscribe"}
    ]}
]}].

Let your logic decide.
Publishing follows the same pattern.
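With {level, ask}, the endpoint behind authorization_url is plain business logic. A hypothetical rule (ours, purely illustrative): private "user-N" channels are only open to user N, everything else is public.

```javascript
// Hypothetical decision function for the authorize-subscribe endpoint.
function authorizeSubscribe(userId, channel) {
  const match = channel.match(/^user-(\d+)$/);
  if (match) {
    // Private per-user channel: only its owner may subscribe.
    return match[1] === String(userId);
  }
  return true; // public channels: anyone may subscribe
}
```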
[Diagram: several browsers each send subscribe({channel: "chat"}) to the RT server]
[Diagram: one browser sends publish({channel: "chat", msg: "hi"}); the RT server delivers message({channel: "chat", msg: "hi"}) to each subscriber]
[Diagram: the publish fans out — every subscriber, including the publisher, receives message({channel: "chat", msg: "hi"})]
[Diagram: browsers subscribe({channel: "chat"}) through the RT server, with the backend alongside]
[Diagram: the backend sends publish({channel: "chat", msg: "hi"}) over Ajax/HTTP; the RT server delivers message({channel: "chat", msg: "hi"}) to the subscribed browsers]
[Diagram: the browser of user 9 calls auth({user: 9, token: <token>}) and subscribe({channel: "user-9"}); the backend asks the RT server presence({channel: "user-9"}) and gets presence([9]) back]
[Diagram: the backend calls send({channel: "user-9", message: <notification>}); the RT server delivers message({channel: "user-9", message: <notification>}) to user 9's browser]
[Diagram: browsers subscribe({channel: "page"}) and subscribe({channel: "score"}) through the RT server]
[Diagram: the backend sends publish({channel: "score", msg: "1:1"}) over regular HTTP; the RT server fans out message({channel: "score", msg: "1:1"}) to every subscribed browser]
This case scales very well (no need to store much state in RT).
Do your tests. Very different scenarios are possible. Benchmarks tend to test the simplest case and brag about the number of connections. (I like to do this too :).
Set up a load test with Tsung or Erlang+Gun to simulate your use case:
• Expected #connections per channel
• Expected #msgs published per user
Thanks! @nacmartin
http://carotene-project.com
http://limenius.com