log
1.
1.1. VC infrastructure
In heptapod we have a root group named comp, containing a variety of subgroups. Some of these groups should be public, while others are internal to comp members exclusively. Within each subgroup, root group members should automatically be granted privileged access to projects. This is relevant for the startup subgroup in particular, where each project is potentially maintained by multiple non-root contributors.
We also need to consider how we will manage subrepos across the organization. It is about time we started integrating HG bundles and potentially mirrors. For our core VC pipeline we should have no reliance on Git, but this may be difficult; it depends on the behavior of HG bundles.
Bookmarks/tags should be used for milestones in the root group and are infrequent. They are more frequent in projects with a regular release life-cycle.
1.2. Approaching Webapps
I started poking around in the webapp space again so that I can launch a landing page for NAS-T quickly. The Rust situation has improved somewhat on the frontend side, and the axum backend stack is nice.
This might seem like a lot of Rust and not a lot of Lisp, which it is, but there's still room for Lisp wherever we need it. It mostly plays a role in the backend, servicing the database and responding to requests from the Rust edges. All of the important tests for the web APIs are also written in Lisp. We will almost certainly use Lisp for all static processing and HTML generation at compile-time.
This, I believe, is the appropriate way to integrate Lisp into a cutting-edge web-app. You get the good parts of Lisp where you need them (interactive debugging, dynamic language, REPL) and avoid the bad parts (OOB optimization, RPS performance) in areas where the customer would be impacted. In this domain, Lisp takes the form of glue rather than the bricks and mortar it sometimes appears to be.
2.
2.1. virt
2.1.1. QEMU
2.1.2. KVM
2.1.3. Hyper-V
2.1.4. Firecracker
2.1.5. Docker
2.1.6. Vagrant
2.1.7. LXC
2.1.8. LXD
2.1.9. containerd
2.1.10. systemd-nspawn
2.1.11. VirtualBox
2.2. Concatenative
2.3. Lisp lisp
These notes pertain to Lisp. More specifically, ANSI Common Lisp in most places.
- https://github.com/lispnik/iup/ - doesn't support MacOS yet, looks cool though
- what we really need is a wasm compiler.. TBD
2.4. AWS usage
We're leveraging AWS for some of our public web servers for now. It's really not realistic to expect that my home desktop and spotty Comcast internet can serve any production workflow. What it is capable of is a private VPN, which can communicate with AWS and other cloud VPN depots via WireGuard (article).
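The WireGuard side of this is mostly configuration. A minimal sketch of the home-desktop side of such a tunnel, where everything - addresses, keys, and the endpoint host - is a placeholder:

```ini
# /etc/wireguard/wg0.conf (sketch; keys, IPs, and endpoint are placeholders)
[Interface]
PrivateKey = <home-desktop-private-key>
Address = 10.0.0.2/24

[Peer]
# the AWS-side peer
PublicKey = <aws-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.0.0.0/24
PersistentKeepalive = 25
```

PersistentKeepalive matters here since the home side sits behind consumer NAT.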
I currently use Google Domains for nas-t.net, otom8.dev, and rwest.io - but that business is now owned by Squarespace, so I would rather move it to Route53.
We have archlinux ec2 image builds here and here - they only half work and aren't maintained, but it's a start. I'm not even sure if I should stick with Arch or cave and use Ubuntu or Amazon Linux. We can serve the static services at little cost; the only big spender will be the heptapod instance, which requires a larger instance and some workers.
We'll try to keep the cost at or around $30/month.
3.
3.1. IDEAS
3.1.3. tenex
3.1.4. mpk
3.1.5. cfg
3.1.6. obj
split out from rlib to separate package
- a purely OOP class library
3.1.7. lab
3.1.8. source categories
- need a way of extracting metadata from a repo
- need ability to search and query libs/packages
- separate modules based on where they belong in our stack?
- app
- lib
- script?
- dist
- software distros
3.1.9. generic query language
from obj protocol? sql compatibility?
check out kdb
3.1.10. bbdb
the Insidious Big Brother Database.
- an application built with obj
- sql
3.1.11. NAS-TV nas t
- media streaming
- gstreamer backend
- audio/video
4.
4.1. DRAFT dylib-skel-1
4.1.1. Overview
Our core languages are Rust and Lisp - this is the killer combo which will allow NAS-T to rapidly develop high-quality software. As such, it's crucial that these two very different languages (i.e. compilers) are able to interoperate seamlessly.
Some interop methods are easy to accommodate via the OS - such as IPC or data sharing - but others are a bit more difficult.
In this 2-part series we'll build an FFI bridge between Rust and Lisp, which can be difficult due to some complications with Rust and because this is not the most popular software stack (yet ;). This is an experiment and may not make it into our code-base, but it's definitely worth adding to the toolbox in case we need it.
4.1.2. FFI
The level of interop we're after in this case is FFI.
Basically, calling Rust code from Lisp and vice-versa. There's an article about calling Rust from Common Lisp here which shows the basics and serves as a great starting point for those interested.
- Overhead
Using FFI involves some overhead. Check here for an example benchmark across a few languages. While building the NAS-T core, I'm very much aware of this, and will need a few sanity benchmarks to make sure the cost doesn't outweigh the benefit. In particular, I'm concerned about crossing multiple language barriers (Rust<->C<->Lisp).
4.1.3. Rust -> C -> Lisp
- Setup
For starters, I'm going to assume we all have Rust (via rustup) and Lisp (sbcl only) installed on our GNU/Linux system (some tweaks needed for Darwin/Windows, not covered in this post).
- Cargo
Create a new library crate. For this example we're focusing on a 'skeleton' for dynamic libraries only, so our experiment will be called dylib-skel, or dysk for short.

```shell
cargo init dysk --lib && cd dysk
```
A src/lib.rs will be generated for you. Go ahead and delete that. We're going to be making our own lib.rs file directly in the root directory (just to be cool).
The next step is to edit your Cargo.toml file. Add these lines after the [package] section and before [dependencies]:

```toml
[lib]
crate-type = ["cdylib", "rlib"]
path = "lib.rs"

[[bin]]
name = "dysk-test"
path = "test.rs"
```
This tells Rust to generate a shared C-compatible object with a .so extension which we can open using dlopen.
- cbindgen
- install
Next, we want the cbindgen program which we'll use to generate header files for C/C++. This step isn't necessary at all, we just want it for further experimentation.

```shell
cargo install --force cbindgen
```

We append the cbindgen crate as a build dependency to our Cargo.toml like so:

```toml
[build-dependencies]
cbindgen = "0.24"
```
- cbindgen.toml

```toml
language = "C"
autogen_warning = "/* Warning, this file is autogenerated by cbindgen. Don't modify this manually. */"
include_version = true
namespace = "dysk"
cpp_compat = true
after_includes = "#define DYSK_VERSION \"0.1.0\""
line_length = 88
tab_width = 2
documentation = true
documentation_style = "c99"
usize_is_size_t = true

[cython]
header = '"dysk.h"'
```
- build.rs

```rust
fn main() -> Result<(), cbindgen::Error> {
  if let Ok(b) = cbindgen::generate(std::env::var("CARGO_MANIFEST_DIR").unwrap()) {
    b.write_to_file("dysk.h");
    Ok(())
  } else {
    panic!("failed to generate dysk.h from cbindgen.toml")
  }
}
```
- lib.rs

```rust
//! lib.rs --- dysk library
use std::ffi::{c_char, c_int, CString};

#[no_mangle]
pub extern "C" fn dysk_hello() -> *const c_char {
  CString::new("hello from rust").unwrap().into_raw()
}

#[no_mangle]
pub extern "C" fn dysk_plus(a: c_int, b: c_int) -> c_int { a + b }

#[no_mangle]
pub extern "C" fn dysk_plus1(n: c_int) -> c_int { n + 1 }
```
- test.rs

```rust
//! test.rs --- dysk test
fn main() {
  let mut i = 0u32;
  while i < 500000000 {
    i += 1;
    dysk::dysk_plus1(2 as core::ffi::c_int);
  }
}
```
- compile
```shell
cargo build --release
```
- load from SBCL
```lisp
(load-shared-object #P"target/release/libdysk.so")
(define-alien-routine dysk-hello c-string)
(define-alien-routine dysk-plus int (a int) (b int))
(define-alien-routine dysk-plus1 int (n int))
(dysk-hello) ;; => "hello from rust"
```
- benchmark
```shell
time target/release/dysk-test
```

```lisp
(time (dotimes (_ 500000000) (dysk-plus1 2)))
```
5.
5.1. cl-dot examples
```lisp
;; Conses
(defmethod cl-dot:graph-object-node ((graph (eql 'example)) (object cons))
  (make-instance 'cl-dot:node
                 :attributes '(:label "cell \\N" :shape :box)))

(defmethod cl-dot:graph-object-points-to ((graph (eql 'example)) (object cons))
  (list (car object)
        (make-instance 'cl-dot:attributed
                       :object (cdr object)
                       :attributes '(:weight 3))))

;; Symbols
(defmethod cl-dot:graph-object-node ((graph (eql 'example)) (object symbol))
  (make-instance 'cl-dot:node
                 :attributes `(:label ,object
                               :shape :hexagon
                               :style :filled
                               :color :black
                               :fillcolor "#ccccff")))

(let* ((data '(a b c #1=(b z) c d #1#))
       (dgraph (cl-dot:generate-graph-from-roots
                'example (list data)
                '(:rankdir "LR" :layout "twopi" :labelloc "t"))))
  (cl-dot:dot-graph dgraph "test-lr.svg" :format #+nil :x11 :svg))
```
```lisp
(let* ((data '(a b))
       (dgraph (cl-dot:generate-graph-from-roots
                'example (list data)
                '(:rankdir "LR"))))
  (cl-dot:print-graph dgraph))
```
6.
6.1. global refs
need a way of indexing, referring to, and annotating objects such as URLs, docs, articles, source files, etc.
What is the best way to get this done?
6.2. On Computers
If you've met me in the past decade, you probably know that I am extremely passionate about computers. Let me first explain why.
On the most basic level computers are little (or big) machines that can be programmed to do things, or compute if we're being technical.[1]
They host and provide access to the Internet, which is a pretty big thing, but they do little things too like unlock your car door and tell your microwave to beep at you. They solve problems. Big or small.
They're also everywhere - which can be scary to think about, but ultimately helps propel us into the future.
There's something pretty cool about that - when you look at the essence of computation. There are endless quantities of these machines which follow the same basic rules and can be used to solve real problems.
6.2.1. The Programmer
Now, let us consider the programmer. They have power - real power. They understand the language of computers and can whisper to them in various dialects. It can be intimidating to witness, until you realize how often the programmer says the wrong thing - a bug.
In reality, the programmer has a symbiotic relationship with computers. Good programmers understand this relationship well.
One day after I got my first job at a software company, I remember being in an all-hands meeting due to a client service outage. We had some management, our lead devs, the product team, and one curious-looking man who happened to be our lead IT consultant, who had just joined. He was sitting up on a hotel bed, shirtless, vaping an e-cig, typing away in what I can only imagine was a shell prompt.
After several minutes he took a swig from a bottle of Coke and said "Node 6 is sick." Then a few seconds later our services were restored. For the next hour on the call he explained what happened and why, but that particular phrase always stuck with me. He didn't say Node 6 was down, or that it had an expired cert - his diagnosis was that it was sick.
The more you work closely with computers, the more you start to think of them this way. You don't start screaming when the computer does the wrong thing, you figure out what's wrong and learn from it. With experience, you start to understand the different behaviors of the machines you work with. I like to call this Machine Empathy.
6.2.2. Programs
I already mentioned bugs - I write plenty of those, but usually I try to write programs. Programs to me are like poetry. I like to think they are for the computer too.
Just like computers, computer programs come in different shapes and sizes but in basic terms they are sets of instructions used to control a computer.
You can write programs to do anything - when I first started, my programs made music. The program was a means to an end. Over time, I started to see the program as something much more. I saw it as the music itself.
7.
8.
9.
11.
goals: make problems smaller.
sections: why lisp?
- doesn't need mentioning more and more
12.
12.1. TODO taobench demo
https://github.com/audreyccheng/taobench - shouldn't have missed this :) obviously we need to implement this using core – in demo/bench/tao?
12.2. TODO clap completion for nushell
12.3. Dataframe scripting
https://studioterabyte.nl/en/blog/polars-vs-pandas
nushell supports DFs, polars underneath?
https://www.nushell.sh/book/cheat_sheet.html
12.4. Cloud Squatting
12.4.1. Google
- Free Cloud Features
- 90-day $300 credits
- e2-micro - free hours worth 1 instance/month
12.4.2. Amazon
- AWS Free Tier
12.4.3. Akamai
- Linode Free Trial
12.4.4. Oracle
- OCI Free Tier
- always free: 2 x oracle autonomous DB
- 2 x AMD Compute VMs
- up to 4 x ARM Ampere A1 with 3,000 CPU hours and 18,000 GB hours per month
- block/object/archive storage
- 30-day $300 credits
13.
13.1. trash as block device
in nushell there is an option for the rm command to always use 'trash' - AFAIK the current approach is via a service (trashd).
An interesting experiment would be to designate a block device as 'trash' - it may be possible to remove the reliance on a service.
This may be an opportunity for the ublk driver to shine - instead of piping to /dev/null, we need a driver for streaming a file to /dev/trash.
13.2. compute power
- mostly x86_64 machines - currently 2 AWS EC2 instances, some podman containers, and our home beowulf server:
- beowulf:
- Zor
- mid-size tower enclosed (Linux/Windows)
- CPU
- Intel Core i7-6700K
- 4 @ 4.0GHz
- GPU
- NVIDIA GeForce GTX 1060
- 6GB
- Storage
- Samsung SSD 850: 232.9GB
- Samsung SSD 850: 465.76GB
- ST2000DM001-1ER1: 1.82TB
- WDC WD80EAZZ-00B: 7.28TB
- PSSD T7 Shield: 3.64TB
- My Passport 0820: 1.36TB
- RAM
- 16GB (2*8) [64GB max]
- DDR4
- Jekyll
- MacBook Pro 2019 (MacOS/Darwin)
- CPU
- Intel
- 8 @
- RAM
- 32G DDR4
- Hyde
- Thinkpad
- CPU
- Intel
- 4 @
- RAM
- 24G DDR3
- Boris
- Pinephone Pro
- CPU
- 64-bit 6-core 4x ARM Cortex A53 + 2x ARM Cortex A72
- GPU
- Mali T860MP4
- RAM
- 4GB LPDDR4
- pi
- Raspberry Pi 4 Model B
- CPU
- Cortex-A72 (ARM v8) 64-bit SoC
- 4 @ 1.8GHz
- RAM
- 8 GB
- DDR4 4200
14.
14.1. BigBenches
```nu
let ms = '1trc/measurements-0.parquet'
dfr open $ms
| dfr group-by station
| dfr agg [
    (dfr col measure | dfr min | dfr as "min")
    (dfr col measure | dfr max | dfr as "max")
    (dfr col measure | dfr sum | dfr as "sum")
    (dfr col measure | dfr count | dfr as "count")
  ]
```
15.
16.
16.1. TODO collect more data
weather - music - etc
17.
17.1. On blocks and devices
In Linux, everything is a file. /dev contains special device files - usually block or character devices.
major, minor = category, device (e.g. 0, 5)
mknod - create special device files
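To make the (major, minor) pairing concrete: stat can print the numbers for an existing node, and mknod takes them when creating one. The mknod line is commented out since it needs root, and the path is made up.

```shell
# %t and %T print the device major/minor numbers in hex;
# /dev/null belongs to the "mem" driver: major 1, minor 3
stat -c 'major=%t minor=%T %n' /dev/null
# creating a node by hand (requires root); 1,5 is the mem driver's zero device:
# mknod /tmp/myzero c 1 5
```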
17.2. save-lisp-and-respawn
sb-ext:*save-hooks*
17.3. syslog for log
sb-posix:
- openlog syslog closelog
- levels: emerg alert crit err warning notice info debug
- setlogmask
18.
19.
19.1. DB Benchmarking
19.2. packy design
- API root: https://packy.compiler.company
- source packs: https://vc.compiler.company/packy
19.2.1. Lib
- Types
- Pack
Primary data type of the library - typically represents a compressed archive, metadata, and ops.
- Bundle
Collection data type, usually contains a set of packs with metadata.
- PackyEndpoint
Represents a Packy instance bound to a UDP socket
- PackyEndpointConfig
Global endpoint configuration object
- PackyClientConfig
Configuration for outgoing packy connections on an endpoint
- PackyServerConfig
Configuration for incoming packy connections on an endpoint
- PackyConnection
Packy connection object
- Traits
20.
20.1. TBD investigate alieneval for phash opps
21.
21.1. How it works
The backend services are written in Rust and controlled by a simple messaging protocol. Services provide a set of common runtime capabilities known as the service protocol, but each is specialized on a unique service type, which may in turn register its own custom protocols (via core).
Services are capable of dispatching data directly to clients, or storing data in the database (sqlite, postgres, mysql).
The frontend clients are predominantly written in Common Lisp and come in many shapes and sizes. There is a cli-client, web-client (CLOG), docker-client (archlinux, stumpwm, McCLIM), and native-client which also compiles to WASM (slint-rs).
21.2. Guide
21.2.1. Build
- install dependencies

```shell
./tools/deps.sh
```

- make executables
Simply run make build. Read the makefile and change the options as needed.
  - Mode (debug, release)
  - Lisp (sbcl, cmucl, ccl)
  - Config (default.cfg)
21.2.2. Run
```shell
./demo -i
```
21.2.3. Config
Configs can be specified in JSON, TOML, RON, or of course SEXP. See default.cfg for an example.
21.2.4. Play
The high-level user interface is presented as a multi-modal GUI application which adapts to the specific application instances below.
- Weather
This backend retrieves weather data using the NWS API.
- Stocks
The 'Stocks' backend features a stock ticker with real-time analysis capabilities.
- Bench
This is a benchmark backend for testing the capabilities of our demo. It spins up some mock services and allows fine-grained control of input/throughput.
23.
23.1. alpine builders
- make sure to apk add:
- git, hg
- clang
- make
- linux-headers
- zstd-dev
- libc-dev?
24.
24.1. bookmarks
- How should such objects be represented within CORE?
- skel/homer mostly
- already have alias
- not sure about obj/otherwise, prob not
25.
26.
26.1. keys.compiler.company
- public openpgp server
- keys.compiler.company
- https://keys.openpgp.org/
- packy/hagrid
27.
- from core/readme.org - bit too verbose
27.1. Bootstrap
To bootstrap the core you will need recent versions of Rust, SBCL, and a C compiler (clang or gcc). Only Unix systems are explicitly supported.
Many parts of the core depend on additional libraries which may or may not be provided by your system's package manager. See the dependency matrix below for details.
In any case, it is always preferred to make use of the infra project to reliably provision the host either from source or pre-built platform-specific binary distributions.
27.2. Build
The Core consists of two major systems: the lisp system and the rust system. There is also an auxiliary emacs system containing a complete Emacs IDE configuration which serves as the base for user customizations.
Building the core will place its output in the .stash directory by default. You can then test, run, and install the resulting files or package them up to be shipped elsewhere.
27.2.1. From Source
- Lisp
The Lisp Core can be found under the lisp directory. It is the largest system, most actively developed, and is intended to cover the complete surface of the user-facing APIs contained in the core.
The core is self-hosted in the sense that it is intended to be built from one of its own programs - the skel project compiler. You may also load any part of the core individually as long as you have SBCL and Quicklisp installed.
- Rust
Today, the Rust components of the core are quite small and isolated. We like Rust right now for the reasonable memory safety guarantees, as an interface to (W)GPU, WASM, LLVM, etc, and because it has an industry-sponsored ecosystem (guaranteed future).
Our Rust code is far less concerned with being completely from scratch - dependencies are imported freely and at will - adapting to whatever FOTM is hot right now.
A workspace is configured such that you can build all components with the following command (NOTE - takes a long time):

```shell
cd rust && cargo build
```
- Emacs
The core contains a collection of Emacs Lisp libraries under the emacs directory which may be installed for the current user using the corresponding Makefile.
Footnotes:
[1] … perform computations