## May 23, 2019
* TOC
{: toc }

Went to bed too late last night and so am only getting to research at 11:30am this morning. Cleaned up my inbox and planned the day 11-11:30am. It's sunny for the third day in a row here in London so I may go for a run today. Also have to do some curriculum research for The Coding Space (my old company, which I'm now consulting for). But most of today (5 hours) should be on research!

### My other remaining research todos 5/23/19

Finishing up previous tasks...

I looked up [https://github.com/stevekrouse/futureofcoding.org/projects/3](https://github.com/stevekrouse/futureofcoding.org/projects/3) and it seems like I mostly forgot about it when I switched over to Workflowy, but that's OK. Those tasks are hanging around in the backlog, which is fine.

It [seems like Unison](http://unisonweb.org/2016-09-17/editing.html) has a pretty straightforward story for immutable code editing. Similar to IPFS in spirit. One difference is that it doesn't seem as focused on the FRP side of things; no time-varying values as far as I can see. It seems like Paul has also had thoughts about [IoT/heterogeneous computing networks](http://unisonweb.org/2016-05-18/iot.html) which are somewhat similar to my recent [OS thoughts about expressivity over hardware](https://futureofcoding.org/log#expressivity-over-hardware). Again, without FRP. It's more related to algebraic effect handlers, which I think Conal would say are like monads in that they import the imperative way of thinking into FP instead of building better abstractions on top. I shot Paul a text to catch up because I think his thinking here would be really helpful at this stage.

**Consider helping REBLS with publicity.** I'm leaning no here but find myself reluctant to fully say no given how little time it seems to require. I'm going to tell JE I want to say no, and double check with him one last time. --> put this in the next JE agenda

**Think about research, abstract vs. concrete. I don't want to solve a small problem, but I do need a more specific thesis or angle than I have now. I also want big thoughts and lots of reading. And broad reading like McLuhan and Alan Kay / curious reader / RISD class stuff. Type theory (books I have in the kitchen) too, but not just papers from recent conferences.**

- Where do I want to go with my OS stuff (which I continue with an example below)? What are the next steps?
  - chat it over with Paul C
  - try to encode it into pseudo-Haskell
  - [Expressivity over hardware](https://futureofcoding.org/log#expressivity-over-hardware) remaining questions
  - try to model social applications with it and [answer the 8 questions for this](https://futureofcoding.org/drafts/statement#questions), including the encapsulated app that you yield a horizontal slice to, which then exposes some local stuff to you -- or maybe doesn't, and just does it on the hardware you already yielded to it, like an MMO game would pull from your controller hardware and yield to the screen real estate you gave it
- Are there general community organizing projects I want to do?
  - new Slack
  - export Slack with search and permalinks
  - edit and publish podcasts
  - organizing community info a la the RISD course
  - online meetup thing
  - business of future of coding aidan conference
  - Juan Benet / Protocol Labs conference
- What's the reading I want to do?
  - basic maths / category theory
  - type theory (textbooks I have, plus maybe dependent types, algebraic effects, a few others)
  - McLuhan and other Alan Kay / curious reader / RISD stuff
  - also /projects/2

In summary, I want to continue with my OS thinking (let's see what JE has to say about it tomorrow), and fit everything else in with a holistic plan for balancing my podcast, community projects, and broader reading. I want to prioritize things, allocate a certain amount of time to the various projects, and go for it. I'm going to let this meta-planning project remain undone for now and continue muddling along, organizing what to work on on a weekly or bi-weekly basis, mostly pulling on memory and emotion to allocate time to various projects. It's a reasonable heuristic for now.

### An Example of a Hardware-expressive OS

Reading about Unison is actually a great contrast and counterpoint to spur on my current "always running" and OS-focused thinking. Let's specify something simple that's inexpressible today: *this portion of my computer screen should be the live value of my phone's front-facing camera*. Here are some questions:

1. What is the camera output data's type? I imagine the camera captures it in form X and my computer screen (or my computer's GPU) needs it in form Y, and potentially we "send it over the wire" most efficiently in form Z, but there's an intermediate abstract representation, denotationally a `Behavior ((x, y) -> Color)`, which we mostly talk about. For example, size transformations would happen on the intermediate representation to get it into the right shape for my computer screen.
2. How do we specify the specific camera? One way is to specify the path to get to it: look for it on wifi, bluetooth, etc. Another way is to specify its unique name and "ask around" for the path ("have you seen this camera?"). These are isomorphic, really.
3. What if the camera is offline? We can imagine an operation `search :: Id -> Maybe Camera`, where `Id` can be a name or path or other identifying information. A `Camera` would in theory expose a `Behavior Image`, a `Behavior Zoom`, possibly a `Behavior Focus`, and an `Event Snapshot`.
4. What if the camera is on a low-bandwidth connection? How do we deal with "dropped frames"? The simplest representation is that my computer's `Behavior Image` is always the last thing it got from the camera. If there's a lag in the connection, the image stays frozen, and then when the connection goes through again the image skips to the newest value it gets (skipping the intermediate frames). Another way to model it would be a `Behavior (Maybe Image)` where we can react to when we're getting `Nothing`s from the camera for some reason. Ultimately, we would probably denotationally model the receiving of this video data as an `Event (t, Image)` where the `Event`'s time is when the Image was received and the `t` is when it was recorded. (We can also model this as an `Event (Event Image)`.) It's then up to us to decide how to apply flow combinators to reduce this Event to a Behavior of various types. Ultimately we need a `Behavior Image` for the screen, but there are many possible algorithms to get there. For example, we could filter out any images with `t`'s smaller than one we've already displayed so as to never go back in time. We could also encode logic to "speed through" lagged video to "catch us up" to the current feed.
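To make points 3 and 4 a bit more concrete, here's a first pass at the pseudo-Haskell I said I wanted to try. It's only a rough denotational sketch: `Behavior` and `Event` get Conal-style semantic models, and `Image`, `Zoom`, `Id`, `Camera`, `search`, etc. are placeholder names I'm making up for illustration, not anything real.

```haskell
-- A rough denotational sketch (semantic models, not an implementation).
-- All concrete types here are placeholders.
module CameraSketch where

type Time = Double

-- Semantic models: a Behavior is a value at every time,
-- an Event is a time-ordered list of occurrences.
type Behavior a = Time -> a
type Event a    = [(Time, a)]

data Image    = Image
data Zoom     = Zoom
data Focus    = Focus
data Snapshot = Snapshot
data Id       = ByName String | ByPath [String]

-- Question 3: a camera, if we can find it, exposes time-varying values and events.
data Camera = Camera
  { image    :: Behavior Image
  , zoom     :: Behavior Zoom
  , focus    :: Behavior Focus
  , snapshot :: Event Snapshot
  }

-- Hypothetical lookup: resolve a name or path to a camera, if it's reachable.
search :: Id -> Maybe Camera
search _ = Nothing  -- stub; the OS would go asking around on wifi, bluetooth, etc.

-- Question 4: frames arrive as an Event whose occurrence time is the arrival
-- time and whose payload carries the recording time t.
type Frames = Event (Time, Image)

-- "Last frame received": freeze on lag, skip ahead when late frames arrive.
latestFrame :: Image -> Frames -> Behavior Image
latestFrame initial frames now =
  case [img | (arrived, (_, img)) <- frames, arrived <= now] of
    [] -> initial
    xs -> last xs

-- Never go back in time: drop frames whose recording time is older than
-- one we've already let through.
monotonicFrames :: Frames -> Frames
monotonicFrames = go Nothing
  where
    go _ [] = []
    go newest ((arrived, (recorded, img)) : rest)
      | maybe True (< recorded) newest =
          (arrived, (recorded, img)) : go (Just recorded) rest
      | otherwise = go newest rest
```

Other policies (speeding through lagged video, surfacing `Nothing`s) would just be different functions from `Frames` to `Behavior Image`; that's the "many possible algorithms" point above.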
5. How do we specify the output of my computer screen? Ultimately my computer screen needs a single `Image`, denotationally `(x, y) -> Color`, to display. We can give it a single `Behavior Image`, which it can sample at ~30fps to get its required `Image`. We can construct our `Behavior Image` from as many sources as we want, splitting the screen up into sections and composing them together (a small sketch of this composition follows point 7 below).
6. How do we make sure only we (and people we authorize) can access the camera? The first way to do this is to only hook it up to computers or networks we control, such as our password-protected wifi network. But the generalized way to do this is to only have our devices expose their data streams encrypted with our public key. For multiple people to have read access, you can instruct the camera to expose multiple encrypted versions of its data, one for each public key. (The IPFS scheme would be that each device has its own public/private key pair, so I reference cameras by their public key and could gain the ability to read a particular camera by having that camera's private key. So access is gained by obtaining a private key, as opposed to exposing a new encrypted stream for another person's public key.)
7. What if we also want to store the movie we are receiving from the camera on my computer for playback at a later date? My computer's disk can be denotationally modeled as a `Behavior [Bytes]` (maybe with dependent types to encode the number of bytes into the type). We could imagine sending our `Behavior Image` to disk in the same way we send it to the screen; the main difference would be accounting for what happens when we run out of disk space.
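Here's the composition sketch promised in point 5. If an `Image` is denotationally `(x, y) -> Color`, then splitting the screen between two sources is just a function that routes each pixel and remaps coordinates. Same rough model as above; `Color` is a placeholder and `sideBySide` is just one illustrative combinator, not a proposed API.

```haskell
-- Same denotational model as the sketch above, with Image now in its
-- denotational form rather than a placeholder.
type Time       = Double
type Behavior a = Time -> a

data Color = Color Double Double Double  -- placeholder RGB
type Image = (Double, Double) -> Color   -- (x, y) -> Color, with x, y in [0, 1]

-- Put one source on the left half of the screen and another on the right,
-- remapping coordinates so each source fills its half.
sideBySide :: Behavior Image -> Behavior Image -> Behavior Image
sideBySide left right t (x, y)
  | x < 0.5   = left  t (2 * x, y)
  | otherwise = right t (2 * x - 1, y)

-- The screen itself just samples the composed Behavior ~30 times a second.
sampleAt :: Time -> Behavior Image -> Image
sampleAt t b = b t
```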
8. How do we model coding this up in an immutable way? The camera's `Behavior Image` changes internally. That is, the camera is quite immutable, while the `Behavior` type allows the image it exposes to change over time. The screen's `Behavior Image` also allows for changes internally (different images over time), but it must also allow for *external changes to how it's computed*. The problem is greatly simplified because the scope of these external changes is limited by the fact that the screen requires a `Behavior Image`: they won't actually be changing the type of the definition. The simplest way to model this is to have an `Event (Behavior Image)` where each event occurrence signifies the new definition for the screen's output. We'd simply apply the `switcher` combinator to obtain a `Behavior` from this `Event Behavior`. But how do we produce the `Event (Behavior Image)`? For simplicity, let's give my screen a public/private key pair like in IPFS. Then we could define the event as all occurrences of signed `Behavior Image`s on various channels, such as over wifi, bluetooth, ethernet, a blockchain, an HTTP server path, etc. However, this answer feels a bit like a cop-out: instead of modeling a changing definition for the screen, we merely model a static definition that accepts new definitions on a specified channel. In this way we decouple the output of the screen in a non-definitional way. For example, nothing would prevent multiple entities from writing to the screen-channel at the same time, or coordinate them, producing a jarring, glitchy image output. If we want a truly orderly, definitional approach, the screen's definition must fully point towards all of its dependencies, while also being able to change over time to point to different dependencies, yet somehow remain immutable.

On second thought, we probably do want to decouple these things, and it should be up to the programmer to only give out the appropriate private key to the "last step" in the computation so as to avoid multiple sources trying to overwrite each other.
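Finally, a sketch of the point-8 story in the same model: the screen's output is a `Behavior Image` obtained by `switcher`-ing over an `Event (Behavior Image)` of accepted redefinitions. `switcher` below mirrors the classic FRP combinator; the signature-checking step is hand-waved as a boolean tag, since the key machinery is exactly the part I'm leaving vague.

```haskell
-- Same semantic models as the sketches above.
type Time       = Double
type Behavior a = Time -> a
type Event a    = [(Time, a)]
data Image      = Image

-- Classic FRP switcher: behave like b0 until an occurrence delivers a new
-- Behavior, then behave like the most recent one delivered so far.
switcher :: Behavior a -> Event (Behavior a) -> Behavior a
switcher b0 e now =
  case [b | (t, b) <- e, t <= now] of
    [] -> b0 now
    bs -> last bs now

-- Redefinitions arriving on some channel, each tagged with whether its
-- signature checked out against the screen's key (hand-waved).
type Signed a = (Bool, a)

acceptSigned :: Event (Signed (Behavior Image)) -> Event (Behavior Image)
acceptSigned e = [(t, b) | (t, (ok, b)) <- e, ok]

-- The screen: show a default until a valid redefinition arrives, then follow
-- the newest accepted definition.
screenOutput :: Behavior Image -> Event (Signed (Behavior Image)) -> Behavior Image
screenOutput defaultImage = switcher defaultImage . acceptSigned
```

The "cop-out" shows up clearly here: `screenOutput` itself never changes; only the stream of accepted redefinitions does, which is the decoupling the previous paragraph decides to embrace.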