log

1. [2023-07-30 Sun]

1.1. VC infrastructure

In heptapod we have a root group named comp, containing a variety of subgroups. Some of these groups should be public, while others are internal to comp members exclusively. Within each subgroup, root group members should automatically be granted privileged access to projects. This is relevant for the startup subgroup in particular, where each project is potentially maintained by multiple non-root contributors.

We also need to consider how we will manage subrepos across the organization. It is about time we start integrating HG bundles and potentially mirrors. For our core VC pipeline we should have no reliance on Git, but this may be difficult. It depends on the behavior of HG bundles.

Bookmarks/tags should be used for milestones in the root group and are infrequent. They are more frequent in projects with a regular release life-cycle.

1.2. Approaching Webapps

I started poking around in the webapp space again so that I can launch a landing page for NAS-T quickly. The Rust situation has improved somewhat on the frontend side, and the axum backend stack is nice.

This might seem like a lot of Rust and not a lot of Lisp, which it is, but there's still room for Lisp wherever we need it. It mostly plays a role in the backend, servicing the database and responding to requests from the Rust edges. All of the important tests for the web APIs are also written in Lisp. We will almost certainly use Lisp for all static processing and HTML generation at compile-time.

This, I believe, is the appropriate way to integrate Lisp into a cutting-edge web-app. You get the good parts of Lisp where you need them (interactive debugging, dynamic language, REPL) and avoid the bad parts (OOB optimization, RPS performance) in areas where the customer would be impacted. In this domain, Lisp takes the form of glue rather than the bricks and mortar it sometimes appears to us as.

2. [2023-10-24 Tue]

2.1. virt

2.1.1. QEMU

2.1.2. KVM

2.1.3. Hyper-V

2.1.4. Firecracker

2.1.5. Docker

2.1.6. Vagrant

2.1.7. LXC

2.1.8. LXD

2.1.9. containerd

2.1.10. systemd-nspawn

2.1.11. VirtualBox

2.2. Concatenative

2.2.1. Factor   factor

  • [2023-07-04 Tue] Factor is a cool concatenative lang but unfortunately the C interface (vm/master.h) no longer exists on the master branch.

2.3. Lisp   lisp

These notes pertain to Lisp. More specifically, ANSI Common Lisp in most places.

2.4. Rust

2.4.1. Serde

  • [2023-07-05 Wed]
An important part of the Rust ecosystem, and another dtolnay contribution. If you want to implement a data format in the Rust ecosystem, this is how you do it.

    The way it works is that you define some special structs, a Serializer and a Deserializer, which implement the serde::Serializer and serde::Deserializer traits, respectively; your data types then derive (or implement) the Serialize and Deserialize traits.

    You can use these structs to provide your public API. The conventional choice is public top-level functions like from_str and to_string. That's it - your serialization library can now read and write your data format as Rust data types.

    enum-representations

    • the default behavior is an externally tagged representation (verbose)

    The docs use strings as core IO when implementing a custom format, but the convention is to implement for T where T is bound by the std::io Read or Write traits. Then you can provide a more robust public API (from_bytes, from_writer, etc.).
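The conventional API shape can be sketched without pulling in serde itself. Below is a minimal, hypothetical format-crate surface - from_str, to_string, and a Read-bound from_reader - with the payload type hard-coded to i32 for brevity (a real serde format would be generic over T: Deserialize):

```rust
use std::io::Read;

// Stand-in for the format crate's error type (illustrative only).
#[derive(Debug)]
pub struct Error(String);

// Conventional entry point: parse a value from a string slice.
pub fn from_str(s: &str) -> Result<i32, Error> {
    s.trim().parse().map_err(|e| Error(format!("{e}")))
}

// Conventional entry point: render a value back out as a string.
pub fn to_string(value: &i32) -> Result<String, Error> {
    Ok(value.to_string())
}

// The more robust variant: bound on std::io::Read instead of &str,
// so callers can pass files, sockets, byte slices, etc.
pub fn from_reader<R: Read>(mut r: R) -> Result<i32, Error> {
    let mut buf = String::new();
    r.read_to_string(&mut buf).map_err(|e| Error(format!("{e}")))?;
    from_str(&buf)
}

fn main() {
    let n = from_reader("42".as_bytes()).unwrap();
    println!("{}", to_string(&n).unwrap()); // prints "42"
}
```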

2.5. C

[2024-09-22 Sun 22:09] <- keys.compiler.company

2.6. CPP

3. [2023-11-01 Wed]

3.1. AWS usage

We're leveraging AWS for some of our public web servers for now. It's really not realistic to expect that my home desktop and spotty Comcast internet can serve any production workflow. What it is capable of is a private VPN, which can communicate with AWS and other cloud VPN depots via WireGuard (article).

I currently use Google Domains for nas-t.net, otom8.dev, and rwest.io - but that business is now owned by Squarespace, so I would rather move it to Route53.

We have archlinux ec2 image builds here and here - they only half work and aren't maintained, but it's a start. I'm not even sure if I should stick with Arch or cave and use Ubuntu or Amazon Linux. We can serve the static services at little cost; the only big spender will be the heptapod instance, which requires a larger instance and some workers.

We'll try to keep the cost at or around $30/month.

4. [2023-11-02 Thu]

4.1. IDEAS

4.1.1. shed

rlib > ulib > ulib > ulib > ulib

  1. sh* tools

    shc,shx,etc

4.1.2. packy

  1. rust
  2. common-lisp
  3. emacs-lisp
  4. python
  5. julia
  6. C
  7. C++

4.1.3. tenex

4.1.4. mpk

4.1.5. cfg

4.1.6. obj

split out from rlib to separate package

  • a purely OOP class library

4.1.7. lab

4.1.8. source categories

  • need a way of extracting metadata from a repo
  • need ability to search and query libs/packages
  • separate modules based on where they belong in our stack?
    • app
    • lib
    • script?
    • dist
      • software distros

4.1.9. generic query language

from obj protocol? sql compatibility?

check out kdb

4.1.10. bbdb

insidious Big Brother database.

  • an application built with obj
  • sql

4.1.11. NAS-TV   nas t

  • media streaming
  • gstreamer backend
  • audio/video

5. [2023-11-05 Sun]

5.1. DRAFT dylib-skel-1

  • State "DRAFT" from [2023-11-05 Sun 22:23]

5.1.1. Overview

Our core languages are Rust and Lisp - this is the killer combo which will allow NAS-T to rapidly develop high-quality software. As such, it's crucial that these two very different languages (i.e. compilers) are able to interoperate seamlessly.

Some interop methods are easy to accommodate via the OS - such as IPC or data sharing, but others are a bit more difficult.

In this 2-part series we'll build a FFI bridge between Rust and Lisp, which is something that can be difficult, due to some complications with Rust and because this is not the most popular software stack (yet ;). This is an experiment and may not make it to our code-base, but it's definitely something worth adding to the toolbox in case we need it.

5.1.2. FFI

The level of interop we're after in this case is FFI.

Basically, calling Rust code from Lisp and vice-versa. There's an article about calling Rust from Common Lisp here which shows the basics and serves as a great starting point for those interested.

  1. Rust != C

    The complication with Rust I mentioned earlier is really just that it is not C. C is old, i.e. well-supported with a stable ABI, which makes creating bindings for a C library a breeze in many languages.

    For a Rust library we need to first appease the compiler, as explained in this section of the Rustonomicon. Among other things it involves changing the calling convention of functions (extern "C" plus #[no_mangle]) and editing the Cargo.toml file to produce a binary with a C-compatible ABI. Rust's default ABI is unstable and can't reliably be used the way the C ABI can.

  2. Overhead

    Using FFI involves some overhead. Check here for an example benchmark across a few languages. While building the NAS-T core, I'm very much aware of this, and will need a few sanity benchmarks to make sure the cost doesn't outweigh the benefit. In particular, I'm concerned about crossing multiple language barriers (Rust<->C<->Lisp).

5.1.3. Rust -> C -> Lisp

  1. Setup

    For starters, I'm going to assume we all have Rust (via rustup) and Lisp (sbcl only) installed on our GNU/Linux system (some tweaks needed for Darwin/Windows, not covered in this post).

    1. Cargo

      Create a new library crate. For this example we're focusing on a 'skeleton' for dynamic libraries only, so our experiment will be called dylib-skel or dysk for short. cargo init dysk --lib && cd dysk

      A src/lib.rs will be generated for you. Go ahead and delete that. We're going to be making our own lib.rs file directly in the root directory (just to be cool).

      The next step is to edit your Cargo.toml file. Add these lines after the [package] section and before [dependencies]:

      [lib]
      crate-type = ["cdylib","rlib"]
      path = "lib.rs"
      [[bin]]
      name="dysk-test"
      path="test.rs"
      

      This tells Rust to generate a shared C-compatible object with a .so extension which we can open using dlopen.

    2. cbindgen
      1. install

        Next, we want the cbindgen program which we'll use to generate header files for C/C++. This step isn't necessary at all, we just want it for further experimentation.

        cargo install --force cbindgen

        We append the cbindgen crate as a build dependency to our Cargo.toml like so:

        [build-dependencies]
        cbindgen = "0.24"
        
      2. cbindgen.toml
        language = "C"
        autogen_warning = "/* Warning, this file is autogenerated by cbindgen. Don't modify this manually. */"
        include_version = true
        namespace = "dysk"
        cpp_compat = true
        after_includes = "#define DYSK_VERSION \"0.1.0\""
        line_length = 88
        tab_width = 2
        documentation = true
        documentation_style = "c99"
        usize_is_size_t = true
        [cython]
        header = '"dysk.h"'
        
      3. build.rs
        fn main() -> Result<(), cbindgen::Error> {
          // Generate dysk.h from cbindgen.toml in the crate root.
          let crate_dir = std::env::var("CARGO_MANIFEST_DIR").unwrap();
          cbindgen::generate(crate_dir)?.write_to_file("dysk.h");
          Ok(())
        }
        
  2. lib.rs
    //! lib.rs --- dysk library
    use std::ffi::{c_char, c_int, CString};
    #[no_mangle]
    pub extern "C" fn dysk_hello() -> *const c_char {
      CString::new("hello from rust").unwrap().into_raw()}
    #[no_mangle]
    pub extern "C" fn dysk_plus(a:c_int,b:c_int) -> c_int {a+b}
    #[no_mangle]
    pub extern "C" fn dysk_plus1(n:c_int) -> c_int {n+1}
    
  3. test.rs
    //! test.rs --- dysk test
    // NB: --release may elide this loop; consider std::hint::black_box.
    fn main() { let mut i = 0u32; while i < 500000000 {i+=1; dysk::dysk_plus1(2 as core::ffi::c_int);}}
    
  4. compile
    cargo build --release
    
  5. load from SBCL
    (load-shared-object #P"target/release/libdysk.so")
    (define-alien-routine dysk-hello c-string)
    (define-alien-routine dysk-plus int (a int) (b int))
    (define-alien-routine dysk-plus1 int (n int))
    (dysk-hello) ;; => "hello from rust"
    
  6. benchmark
    time target/release/dysk-test
    
    (time (dotimes (_ 500000000) (dysk-plus1 2)))
    

6. [2023-11-24 Fri]

6.1. cl-dot examples

(defmethod cl-dot:graph-object-node ((graph (eql 'example)) (object cons))
  (make-instance 'cl-dot:node
                 :attributes '(:label "cell \\N"
                               :shape :box)))
(defmethod cl-dot:graph-object-points-to ((graph (eql 'example)) (object cons))
  (list (car object)
        (make-instance 'cl-dot:attributed
                       :object (cdr object)
                       :attributes '(:weight 3))))
;; Symbols
(defmethod cl-dot:graph-object-node ((graph (eql 'example)) (object symbol))
  (make-instance 'cl-dot:node
                 :attributes `(:label ,object
                               :shape :hexagon
                               :style :filled
                               :color :black
                               :fillcolor "#ccccff")))
(let* ((data '(a b c #1=(b z) c d #1#))
       (dgraph (cl-dot:generate-graph-from-roots 'example (list data)
                                                 '(:rankdir "LR" :layout "twopi" :labelloc "t"))))
  (cl-dot:dot-graph dgraph "test-lr.svg" :format #+nil :x11 :svg))
(let* ((data '(a b))
       (dgraph (cl-dot:generate-graph-from-roots 'example (list data)
                                                 '(:rankdir "LR"))))
          (cl-dot:print-graph dgraph))

7. [2023-12-05 Tue]

7.1. global refs

need a way of indexing, referring to, and annotating objects such as URLs, docs, articles, source files, etc.

What is the best way to get this done?

8. [2023-12-09 Sat]

9. [2023-12-12 Tue]

9.1. On Computers

If you've met me in the past decade, you probably know that I am extremely passionate about computers. Let me first explain why.

On the most basic level computers are little (or big) machines that can be programmed to do things, or compute if we're being technical.1

They host and provide access to the Internet, which is a pretty big thing, but they do little things too like unlock your car door and tell your microwave to beep at you. They solve problems. Big or small.

They're also everywhere - which can be scary to think about, but ultimately helps propel us into the future.

There's something pretty cool about that - when you look at the essence of computation. There are endless quantities of these machines which follow the same basic rules and can be used to solve real problems.

9.1.1. The Programmer

Now, let us consider the programmer. They have power. real power. They understand the language of computers, can whisper to them in various dialects. It can be intimidating to witness until you realize how often the programmer says the wrong thing - a bug.

In reality, the programmer has a symbiotic relationship with computers. Good programmers understand this relationship well.

One day after I got my first job at a software company, I remember being in an all-hands meeting due to a client service outage. We had some management, our lead devs, the product team, and one curious-looking man - our lead IT consultant, who had just joined. He was sitting up on a hotel bed, shirtless, vaping an e-cig, typing away in what I can only imagine was a shell prompt.

After several minutes he took a swig from a bottle of Coke and said "Node 6 is sick." A few seconds later our services were restored. For the next hour on the call he explained what happened and why, but that particular phrase always stuck with me. He didn't say Node 6 was down, or had an expired cert - his diagnosis was that it was sick.

The more you work closely with computers, the more you start to think of them this way. You don't start screaming when the computer does the wrong thing, you figure out what's wrong and learn from it. With experience, you start to understand the different behaviors of the machines you work with. I like to call this Machine Empathy.

9.1.2. Programs

I already mentioned bugs - I write plenty of those, but usually I try to write programs. Programs to me are like poetry. I like to think they are for the computer too.

Just like computers, computer programs come in different shapes and sizes but in basic terms they are sets of instructions used to control a computer.

You can write programs to do anything - when I first started, my programs made music. The program was a means to an end. Over time, I started to see the program as something much more. I saw it as the music itself.

9.2. On Infra

Something that is missing from many organizations, big or small, is an effective way to store and access information, even about their own org.

It can be a difficult problem to solve - usually there's the official source, say Microsoft SharePoint, and then the list of unofficial sources which become tribal corporate hacker knowledge. Maybe the unofficial ones are more current, or are annotated nicely, but their very existence breaks the system. There's no longer a single source of truth.

My priority in this department is writing services which process and store information from a variety of sources in a distributed knowledge graph. The graph can later be queried to access information on-demand.

My idea of infrastructure is in fact to build my own Cloud. Needless to say I don't have an O365 subscription, and wherever possible I'll be relying on hardware I have physical access to. I'm not opposed to cloud services at large, but on principle I like to think we shouldn't be built on them.

10. [2023-12-23 Sat]

10.1. https://cal-coop.gitlab.io/utena/utena-specification/main.pdf

From the author of cl-decentralise2. A draft specification of a Maximalist Computing System.

11. [2023-12-24 Sun]

12. [2023-12-28 Thu]

12.1. useful internals

sb-sys:*runtime-dlhandle*
sb-fasl:+fasl-file-version+
sb-fasl:+backend-fasl-file-implementation+
sb-debug:print-backtrace
sb-debug:map-backtrace
sb-pretty:pprint-dispatch-table
sb-lockless:
sb-ext:simd-pack
sb-walker:define-walker-template
sb-walker:macroexpand-all
sb-walker:walk-form
sb-kernel:empty-type
sb-kernel:*eval-calls*
sb-kernel:*gc-pin-code-pages*
sb-kernel:*restart-clusters*
sb-kernel:*save-lisp-clobbered-globals*
sb-kernel:*top-level-form-p*
sb-kernel:*universal-fun-type*
sb-kernel:*universal-type*
sb-kernel:*wild-type*
sb-kernel:+simd-pack-element-types+
(sb-vm:memory-usage)
(sb-vm:boxed-context-register)
(sb-vm:c-find-heap->arena)
(sb-vm:copy-number-to-heap)
(sb-vm:dump-arena-objects)
(sb-vm:fixnumize)
(sb-vm:rewind-arena)
(sb-vm:show-heap->arena)
(sb-vm:with/without-arena)
(sb-cltl2:{augment-environment,compiler-let,define-declaration,parse-macro})
(sb-cltl2:{declaration-information, variable-information, function-information})
sb-di:
sb-assem:
sb-md5:
sb-regalloc:
sb-disassem:

13. [2024-01-03 Wed]

13.1. SigMF

Sharing sets of recorded signal data is an important part of science and engineering. It enables multiple parties to collaborate, is often a necessary part of reproducing scientific results (a requirement of scientific rigor), and enables sharing data with those who do not have direct access to the equipment required to capture it.

Unfortunately, these datasets have historically not been very portable, and there is not an agreed upon method of sharing metadata descriptions of the recorded data itself. This is the problem that SigMF solves.

By providing a standard way to describe data recordings, SigMF facilitates the sharing of data, prevents the "bitrot" of datasets wherein details of the capture are lost over time, and makes it possible for different tools to operate on the same dataset, thus enabling data portability between tools and workflows.

the-spec: https://github.com/sigmf/SigMF/blob/sigmf-v1.x/sigmf-spec.md
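For reference, a SigMF recording is a pair of files - the raw samples (.sigmf-data) plus a JSON metadata file (.sigmf-meta) with global, captures, and annotations sections. A minimal, illustrative metadata file (the values here are made up) looks roughly like:

```json
{
  "global": {
    "core:datatype": "cf32_le",
    "core:sample_rate": 2048000,
    "core:version": "1.0.0",
    "core:description": "example capture"
  },
  "captures": [
    { "core:sample_start": 0, "core:frequency": 915000000 }
  ],
  "annotations": []
}
```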

13.2. LibVOLK

Vector-Optimized Library of Kernels (simd)

13.3. /dev/fb*

framebuffers, used by fbgrab/fbcat program

14. [2024-01-04 Thu]

goals: make problems smaller.

sections: why lisp?

  • doesn't need mentioning more and more

15. [2024-01-20 Sat]

15.1. TODO taobench demo

https://github.com/audreyccheng/taobench - shouldn't have missed this :) obviously we need to implement this using core – in demo/bench/tao?

15.3. Dataframe scripting

15.4. Cloud Squatting

15.4.1. Google

15.4.2. Amazon

  • AWS Free Tier

15.4.3. Akamai

  • Linode Free Trial

15.4.4. Oracle

  • OCI Free Tier
    • always free: 2 x oracle autonomous DB
    • 2 x AMD Compute VMs
    • up to 4 x ARM Ampere A1 with 3,000 OCPU-hours and 18,000 GB-hours per month
    • block/object/archive storage
    • 30-day $300 credits

16. [2024-01-29 Mon]

16.1. trash as block device

in nushell there is an option for the rm command to always use 'trash' - AFAIK the current approach is via a service (trashd).

An interesting experiment would be to designate a block device as 'trash' - it may be possible to remove the reliance on a service.

may be an opportunity for the ublk driver to shine - instead of piping to /dev/null we need a driver for streaming a file to /dev/trash

16.2. compute power

  • mostly x86_64 machines - currently 2 AWS EC2 instances, some podman containers, and our home beowulf server:
  • beowulf:
    • Zor
      • mid-size tower enclosed (Linux/Windows)
      • CPU
        • Intel Core i7-6700K
        • 4 @ 4.0
      • GPU
        • NVIDIA GeForce GTX 1060
        • 6GB
      • Storage
        • Samsung SSD 850: 232.9GB
        • Samsung SSD 850: 465.76GB
        • ST2000DM001-1ER1: 1.82TB
        • WDC WD80EAZZ-00B: 7.28TB
        • PSSD T7 Shield: 3.64TB
        • My Passport 0820: 1.36TB
      • RAM
        • 16GB (2*8) [64GB max]
        • DDR4
    • Jekyll
      • MacBook Pro 2019 (MacOS/Darwin)
      • CPU
        • Intel
        • 8 @
      • RAM
        • 32G DDR4
    • Hyde
      • Thinkpad
      • CPU
        • Intel
        • 4 @
      • RAM
        • 24G DDR3
    • Boris
      • Pinephone Pro
      • CPU
        • 64-bit 6-core 4x ARM Cortex A53 + 2x ARM Cortex A72
      • GPU
        • Mali T860MP4
      • RAM
        • 4GB LPDDR4
    • pi
      • Raspberry Pi 4 Model B
      • CPU
        • Cortex-A72 (ARM v8) 64-bit SoC
        • 4 @ 1.8GHz
      • RAM
        • 8 GB
        • DDR4 4200

17. [2024-02-10 Sat]

17.1. BigBenches

let ms = '1trc/measurements-0.parquet'
dfr open $ms
| dfr group-by  station
| dfr agg [
  (dfr col measure | dfr min | dfr as "min")
  (dfr col measure | dfr max | dfr as "max")
  (dfr col measure | dfr sum | dfr as "sum")
  (dfr col measure | dfr count | dfr as "count")
]

18. [2024-02-18 Sun]

18.1. WL vs X

In the past few months there has been drama regarding Wayland vs X. It seems to be on everyone's minds after Artem's freakout issue and the follow-up YT vids/comments.

I admit that it made me reconsider the fitness of WL as a whole - there was a github gist that made some scathing arguments against it.

It's an odd debate though. I think there are many misunderstandings.

So first off, if we look at the homepage https://wayland.freedesktop.org/, Wayland claims it is a replacement for X11. It now has manifest destiny, which in my opinion is a great shame.

X-pros seem to agree that Wayland has manifest destiny - like if you are building software that looks remotely like a window system, it's a successor to X. That's the model of doing things and there's no way around it.

The disagreement starts with how this destiny - of an X2 - should be fulfilled. X-pros want a fork of X, but it's too late for that. WL-pros want X to run on top of Wayland compositor: https://wayland.freedesktop.org/xserver.html.

Xwayland is a problem for me. From the project description: 'if we're migrating away from X, it makes sense to have a good backwards compatibility story.' Full disclosure: I have never done significant work on Xwayland, so perhaps my opinion is unwarranted. But I have no intention of attempting to maintain a computer system that uses Wayland and X clients at the same time.

To me, X is ol' reliable. Every distro has first-class X support, and it runs on most systems with very little user intervention. Where it doesn't, there is 20+ years of dev history and battle-tested workarounds for you to find your solution in.

Wayland is the new kid on the block, born just in 2008. It's a fresh start to one of the most difficult challenges in software - window systems. A re-write would be pointless though, and so the real value-add is in design. Wayland is designed as a protocol and collection of libraries which are implemented in your own compositor. Coming from Lisp - with ANSI Common Lisp and SRFIs, this feels right even if the implementation is something very different (compositor vs compiler).

With X, it is assumed to be much harder to write an equivalent 'compositor'. Here's the thing though - a sufficiently complex X client implementation is impossible to replicate 1:1 in WL. This is really the crux of Artem's argument in his issue. He asked for a 1:1 equivalent X/WL comparison when no such thing exists, and in my opinion it is a waste of time.

The WL core team is fully aware of this dichotomy, but also that this is in no way a problem or weakness in either system. It means they're different systems, goddammit.

If it were up to me, Xwayland wouldn't exist. I understand why it does, and that it does make things easier for developers who need to support both, and users who have multiple apps with multiple windowing requirements. It's a bandaid though, and one that is particularly dangerous because it reinforces the idea that Wayland is just X2 and that they're fully compatible.

What interests me in the Wayland world right now is the idea of a small, modular, full-stack Wayland compositor API. There are several 'kiosk' based compositors for single applications (cage), but these aren't complete solutions. It is possible to get much closer to the metal, and that's where I want to be so that I can build my own APIs on top - I don't want to live on top of X, and I certainly don't want to live on top of X on top of WL. I want a pure solution that hides as little as possible, exposing the interesting bits.

19. [2024-03-01 Fri]

19.1. TODO collect more data

20. [2024-03-02 Sat]

20.1. On blocks and devices

In Linux, everything is a file. /dev contains special device files - usually block or character devices.

major, minor = driver category, specific device (e.g. 1, 5 for /dev/zero)

mknod - create special device files
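The major:minor pair is visible from ls -l, or programmatically from a stat. A quick Rust sketch (Linux only; the bit layout mirrors glibc's major()/minor() macros):

```rust
// Read the device numbers of a node under /dev (Linux only).
use std::os::unix::fs::MetadataExt;

// Decompose a Linux dev_t the way glibc's major()/minor() macros do.
fn major(dev: u64) -> u64 { ((dev >> 8) & 0xfff) | ((dev >> 32) & 0xfffff000) }
fn minor(dev: u64) -> u64 { (dev & 0xff) | ((dev >> 12) & 0xffffff00) }

fn main() -> std::io::Result<()> {
    let rdev = std::fs::metadata("/dev/null")?.rdev();
    // /dev/null is the mem driver's null device: major 1, minor 3.
    println!("/dev/null = {}:{}", major(rdev), minor(rdev));
    Ok(())
}
```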

redhat hints

dd if=/dev/zero of=myfile bs=1M count=32
losetup --show -f myfile
ls -al /dev/loop0
losetup -d /dev/loop0 #teardown
echo "sup dude" > /dev/loop0
dd if=/dev/loop0 bs=1
dd if=/dev/nvme0 of=/dev/null status=progress
#pacman -S hdparm
hdparm -T /dev/nvme0
modprobe scsi_debug add_host=5 max_luns=10 num_tgts=2 dev_size_mb=16

sparsefiles: create with C, dd, or truncate

truncate --help
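The same trick is available from Rust via File::set_len, the syscall behind truncate -s. A sketch (path and size are arbitrary choices for the demo):

```rust
// Create a sparse file: 1 GiB apparent size, (almost) no blocks allocated.
use std::fs::File;
use std::os::unix::fs::MetadataExt; // blocks() reveals real allocation

fn main() -> std::io::Result<()> {
    let path = "/tmp/sparse-demo";
    let f = File::create(path)?;
    f.set_len(1 << 30)?; // extend to 1 GiB without writing any data
    let md = std::fs::metadata(path)?;
    // st_blocks is counted in 512-byte units; a fresh hole allocates ~0.
    println!("apparent: {} bytes, allocated: {} bytes",
             md.len(), md.blocks() * 512);
    std::fs::remove_file(path)?;
    Ok(())
}
```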

test mkfs.btrfs on 10T dummy block device

dd if=/dev/zero of=/tmp/bb1 bs=1 count=1 seek=10T
du -sh /tmp/bb1
losetup --show -f /tmp/bb1
mkfs.btrfs /dev/loop0

diagnostics

iostat # pacman -S sysstat
blktrace # paru -S blktrace
iotop # pacman -S iotop

bcc/ trace: Who/which process is executing specific functions against block devices?

bcc/biosnoop: Which process is accessing the block device, how many bytes are accessed, which latency for answering the requests?

at the kernel level besides BPF we got kmods and DKMS,

compression/de-duplication can be done via VDO kernel mod

https://en.wikipedia.org/wiki/Dynamic_Kernel_Module_Support

20.2. save-lisp-and-respawn

sb-ext:*save-hooks*

20.3. syslog for log

sb-posix:

  • openlog syslog closelog
  • levels: emerg alert crit err warning notice info debug
  • setlogmask

21. [2024-03-13 Wed]

21.1. RESEARCH sbcl-wiki

21.2. IR1

21.3. IR2

22. [2024-03-17 Sun]

22.1. DB Benchmarking

22.2. packy design

22.2.1. Lib

  1. Types
    1. Pack

      Primary data type of the library - typically represents a compressed archive, metadata, and ops.

    2. Bundle

      Collection data type, usually contains a set of packs with metadata.

    3. PackyEndpoint

      Represents a Packy instance bound to a UDP socket

    4. PackyEndpointConfig

      Global endpoint configuration object

    5. PackyClientConfig

      Configuration for outgoing packy connections on an endpoint

    6. PackyServerConfig

      Configuration for incoming packy connections on an endpoint

    7. PackyConnection

      Packy connection object

  2. Traits
    1. PackyClient
      1. query
      2. install
      3. update
      4. login
      5. logout
      6. pull
      7. push
    2. PackyServer
      1. start_packy_server
      2. stop_packy_server
      3. start_packy_registry
    3. PackyRegistry
      1. register_pack
      2. register_user
      3. register_bundle
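The inventory above can be sketched as Rust signatures. The type and trait names come straight from the outline; every field, method signature, and return type below is an assumption for illustration only:

```rust
// Sketch of the packy library surface from the design notes.
use std::net::UdpSocket;

pub struct Pack;              // compressed archive + metadata + ops
pub struct Bundle(Vec<Pack>); // a set of packs with metadata
pub struct PackyEndpointConfig;
pub struct PackyServerConfig;
pub struct PackyConnection;

// A Packy instance bound to a UDP socket.
pub struct PackyEndpoint {
    socket: UdpSocket,
    config: PackyEndpointConfig,
}

pub trait PackyClient {
    fn query(&self, q: &str) -> Vec<String>;
    fn install(&mut self, pack: &str) -> bool;
    fn update(&mut self) -> bool;
    fn login(&mut self, user: &str) -> bool;
    fn logout(&mut self);
    fn pull(&mut self, pack: &str) -> Option<Pack>;
    fn push(&mut self, pack: Pack) -> bool;
}

pub trait PackyServer {
    fn start_packy_server(&mut self, cfg: PackyServerConfig);
    fn stop_packy_server(&mut self);
    fn start_packy_registry(&mut self);
}

pub trait PackyRegistry {
    fn register_pack(&mut self, pack: Pack);
    fn register_user(&mut self, user: &str);
    fn register_bundle(&mut self, bundle: Bundle);
}

fn main() {
    let b = Bundle(vec![Pack]);
    println!("{} pack(s)", b.0.len());
}
```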

23. [2024-03-25 Mon]

23.1. TBD investigate alieneval for phash opps

24. [2024-04-19 Fri]

24.1. How it works

The backend services are written in Rust and controlled by a simple messaging protocol. Services provide common runtime capabilities known as the service protocol, but are specialized on a unique service type, which may in turn register its own custom protocols (via core).

Services are capable of dispatching data directly to clients, or storing data in the database (sqlite, postgres, mysql).

The frontend clients are predominantly written in Common Lisp and come in many shapes and sizes. There is a cli-client, web-client (CLOG), docker-client (archlinux, stumpwm, McCLIM), and native-client which also compiles to WASM (slint-rs).

24.2. Guide

24.2.1. Build

  • install dependencies

    ./tools/deps.sh
    
  • make executables
    Simply run make build. Read the makefile and change the options as needed.
  • Mode (debug, release)
  • Lisp (sbcl, cmucl, ccl)
  • Config (default.cfg)

24.2.2. Run

./demo -i

24.2.3. Config

Configs can be specified in JSON, TOML, RON, or of course SEXP. See default.cfg for an example.

24.2.4. Play

The high-level user interface is presented as a multi-modal GUI application which adapts to the specific application instances below.

  1. Weather

    This backend retrieves weather data using the NWS API.

  2. Stocks

    The 'Stocks' backend features a stock ticker with real-time analysis capabilities.

  3. Bench

    This is a benchmark backend for testing the capabilities of our demo. It spins up some mock services and allows fine-grained control of input/throughput.

24.3. tasks

24.3.1. TODO DSLs

  • consider tree-sitter parsing layout, use as a guide for developing a single syntax which expands to Rust or C.
  • with-rs
  • with-c
  • with-rs/c
  • with-cargo
  • compile-rs/c
  1. TODO rs-macroexpand
    • rs-gen-file
    • rs-defmacro
    • rs-macros
    • rs-macroexpand
    • rs-macroexpand-1
  2. TODO c-macroexpand
    • c-gen-file h/c
    • c-defmacro
    • c-macros
    • c-macroexpand
    • c-macroexpand-1
  3. TODO slint-macroexpand
    • slint-gen-file
    • slint-defmacro
    • slint-macros
    • slint-macroexpand
    • slint-macroexpand-1
  4. TODO html (using who)

24.3.2. TODO web templates

create a basic static page in CL which will be used to host Slint UIs and other WASM doo-dads in a browser.

24.3.3. TODO CLI

using clingon, decide on generic options and write it up

24.3.4. TODO docs

work on doc generation – Rust and CL should be accounted for.

24.3.5. TODO tests

We have none! need to make it more comfy - set up testing in all Rust crates and for the lisp systems.

25. [2024-04-25 Thu]

26. [2024-07-31 Wed]

26.1. alpine builders

  • make sure to apk add:
    • git, hg
    • clang
    • make
    • linux-headers
    • zstd-dev
    • libc-dev?

27. [2024-08-04 Sun]

27.1. bookmarks

  • How should such objects be represented within CORE?
  • skel/homer mostly
    • already have alias
  • not sure about obj/otherwise, prob not

28. [2024-08-08 Thu]

28.1. Intelligent Design in Software

  • starting from a space where there are no external influences - a biome
  • answer questions regarding the nature of the software and its capabilities
  • incrementally adjust inter-dependencies
  • optimize
  • protect the biome at all costs
  • focus on composition
  • build applications
  • re-integrate lessons learned

29. [2024-08-16 Fri]

29.1. keys.compiler.company

[2024-09-22 Sun 22:09] -> C

Footnotes:

1

… perform computations