Reset git tree

This commit is contained in:
Cyryl Płotnicki 2022-06-25 08:47:06 +01:00
commit 40545022f6
431 changed files with 4686 additions and 0 deletions

34
.build.yml Normal file

@@ -0,0 +1,34 @@
image: nixos/latest
sources:
- https://git.sr.ht/~cyplo/blog
secrets:
- 6c23a8c1-7a30-4a7e-b3b7-0171e898a7d3
- 47d167ed-ef43-4515-9923-6954d34a4fab
oauth: pages.sr.ht/PAGES:RW
tasks:
- fetch_tools: |
cd blog
nix-shell --pure --run 'date'
- github_push: |
set +xv
cd blog
GITHUB_TOKEN=`cat ~/.github_token`
git remote add github https://cyplo:$GITHUB_TOKEN@github.com/cyplo/blog.git
nix-shell --pure --run 'git push github --all --force'
- build: |
cd blog
nix-shell --pure --run 'hugo --gc --minify'
- netlify_publish: |
set +xv
cd blog
export NETLIFY_TOKEN=`cat ~/.netlify_token`
if [ `git branch --contains HEAD | grep -i master` ]; then nix-shell --pure --keep NETLIFY_TOKEN --run 'netlify-deployer master 7a0944f0-cfd0-424b-8252-ca44fc950350 public'; fi
- package: |
tar -C blog/public -cv . > site.tar
gzip --best site.tar
ls -hal site.tar.gz
- sourcehut_upload: |
acurl -i -f https://pages.sr.ht/publish/cyplo.srht.site -Fcontent=@site.tar.gz
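The `netlify_publish` step above gates deployment by grepping `git branch --contains HEAD` output, which can misfire — any branch whose name merely contains "master" would match the case-insensitive grep. A standalone sketch of a sturdier gate using `git rev-parse` (script and messages are illustrative, not part of the manifest):

```shell
# Illustrative branch gate: deploy only when HEAD is on master.
# `git rev-parse --abbrev-ref HEAD` prints the current branch name directly,
# so no parsing of `git branch` output is needed.
branch="$(git rev-parse --abbrev-ref HEAD 2>/dev/null || echo unknown)"
if [ "$branch" = "master" ]; then
  echo "on master: deploying"
else
  echo "on $branch: skipping deploy"
fi
```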

1
.envrc Normal file

@@ -0,0 +1 @@
eval "$(lorri direnv)"

3
.github/FUNDING.yml vendored Normal file

@@ -0,0 +1,3 @@
# These are supported funding model platforms
liberapay: cyplo
patreon: cyplo

2
.gitignore vendored Normal file

@@ -0,0 +1,2 @@
public/
resources/_gen/

10
.gitlab-ci.yml Normal file

@@ -0,0 +1,10 @@
image: monachus/hugo
variables:
GIT_SUBMODULE_STRATEGY: recursive
pages:
tags:
- docker
script:
- hugo

3
.gitmodules vendored Normal file

@@ -0,0 +1,3 @@
[submodule "themes/beautifulhugo"]
path = themes/beautifulhugo
url = https://github.com/halogenica/beautifulhugo.git
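The theme comes in as a git submodule, which is why a fresh checkout needs a recursive clone (the GitLab config sets `GIT_SUBMODULE_STRATEGY: recursive` for the same reason). A self-contained sketch of what that does for this layout, using throwaway local repositories (all paths hypothetical):

```shell
# Build a throwaway super-repo with a theme submodule, then clone it
# recursively so the submodule working tree is populated as well.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/theme"
git -C "$tmp/theme" -c user.email=t@example.com -c user.name=t \
  commit -q --allow-empty -m "theme"
git init -q "$tmp/blog"
git -C "$tmp/blog" -c user.email=t@example.com -c user.name=t \
  commit -q --allow-empty -m "init"
# newer git needs protocol.file.allow=always for local-path submodules
git -C "$tmp/blog" -c protocol.file.allow=always \
  submodule add -q "$tmp/theme" themes/beautifulhugo
git -C "$tmp/blog" -c user.email=t@example.com -c user.name=t \
  commit -q -m "add theme submodule"
git -c protocol.file.allow=always \
  clone -q --recurse-submodules "$tmp/blog" "$tmp/clone"
ls "$tmp/clone/themes/beautifulhugo" >/dev/null && echo "submodule populated"
```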

104
LICENSE Normal file

@@ -0,0 +1,104 @@
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License
By exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License ("Public License"). To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions.
Section 1 – Definitions.
Adapted Material means material subject to Copyright and Similar Rights that is derived from or based upon the Licensed Material and in which the Licensed Material is translated, altered, arranged, transformed, or otherwise modified in a manner requiring permission under the Copyright and Similar Rights held by the Licensor. For purposes of this Public License, where the Licensed Material is a musical work, performance, or sound recording, Adapted Material is always produced where the Licensed Material is synched in timed relation with a moving image.
Adapter's License means the license You apply to Your Copyright and Similar Rights in Your contributions to Adapted Material in accordance with the terms and conditions of this Public License.
BY-NC-SA Compatible License means a license listed at creativecommons.org/compatiblelicenses, approved by Creative Commons as essentially the equivalent of this Public License.
Copyright and Similar Rights means copyright and/or similar rights closely related to copyright including, without limitation, performance, broadcast, sound recording, and Sui Generis Database Rights, without regard to how the rights are labeled or categorized. For purposes of this Public License, the rights specified in Section 2(b)(1)-(2) are not Copyright and Similar Rights.
Effective Technological Measures means those measures that, in the absence of proper authority, may not be circumvented under laws fulfilling obligations under Article 11 of the WIPO Copyright Treaty adopted on December 20, 1996, and/or similar international agreements.
Exceptions and Limitations means fair use, fair dealing, and/or any other exception or limitation to Copyright and Similar Rights that applies to Your use of the Licensed Material.
License Elements means the license attributes listed in the name of a Creative Commons Public License. The License Elements of this Public License are Attribution, NonCommercial, and ShareAlike.
Licensed Material means the artistic or literary work, database, or other material to which the Licensor applied this Public License.
Licensed Rights means the rights granted to You subject to the terms and conditions of this Public License, which are limited to all Copyright and Similar Rights that apply to Your use of the Licensed Material and that the Licensor has authority to license.
Licensor means the individual(s) or entity(ies) granting rights under this Public License.
NonCommercial means not primarily intended for or directed towards commercial advantage or monetary compensation. For purposes of this Public License, the exchange of the Licensed Material for other material subject to Copyright and Similar Rights by digital file-sharing or similar means is NonCommercial provided there is no payment of monetary compensation in connection with the exchange.
Share means to provide material to the public by any means or process that requires permission under the Licensed Rights, such as reproduction, public display, public performance, distribution, dissemination, communication, or importation, and to make material available to the public including in ways that members of the public may access the material from a place and at a time individually chosen by them.
Sui Generis Database Rights means rights other than copyright resulting from Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, as amended and/or succeeded, as well as other essentially equivalent rights anywhere in the world.
You means the individual or entity exercising the Licensed Rights under this Public License. Your has a corresponding meaning.
Section 2 – Scope.
License grant.
Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable license to exercise the Licensed Rights in the Licensed Material to:
reproduce and Share the Licensed Material, in whole or in part, for NonCommercial purposes only; and
produce, reproduce, and Share Adapted Material for NonCommercial purposes only.
Exceptions and Limitations. For the avoidance of doubt, where Exceptions and Limitations apply to Your use, this Public License does not apply, and You do not need to comply with its terms and conditions.
Term. The term of this Public License is specified in Section 6(a).
Media and formats; technical modifications allowed. The Licensor authorizes You to exercise the Licensed Rights in all media and formats whether now known or hereafter created, and to make technical modifications necessary to do so. The Licensor waives and/or agrees not to assert any right or authority to forbid You from making technical modifications necessary to exercise the Licensed Rights, including technical modifications necessary to circumvent Effective Technological Measures. For purposes of this Public License, simply making modifications authorized by this Section 2(a)(4) never produces Adapted Material.
Downstream recipients.
Offer from the Licensor – Licensed Material. Every recipient of the Licensed Material automatically receives an offer from the Licensor to exercise the Licensed Rights under the terms and conditions of this Public License.
Additional offer from the Licensor – Adapted Material. Every recipient of Adapted Material from You automatically receives an offer from the Licensor to exercise the Licensed Rights in the Adapted Material under the conditions of the Adapter's License You apply.
No downstream restrictions. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material.
No endorsement. Nothing in this Public License constitutes or may be construed as permission to assert or imply that You are, or that Your use of the Licensed Material is, connected with, or sponsored, endorsed, or granted official status by, the Licensor or others designated to receive attribution as provided in Section 3(a)(1)(A)(i).
Other rights.
Moral rights, such as the right of integrity, are not licensed under this Public License, nor are publicity, privacy, and/or other similar personality rights; however, to the extent possible, the Licensor waives and/or agrees not to assert any such rights held by the Licensor to the limited extent necessary to allow You to exercise the Licensed Rights, but not otherwise.
Patent and trademark rights are not licensed under this Public License.
To the extent possible, the Licensor waives any right to collect royalties from You for the exercise of the Licensed Rights, whether directly or through a collecting society under any voluntary or waivable statutory or compulsory licensing scheme. In all other cases the Licensor expressly reserves any right to collect such royalties, including when the Licensed Material is used other than for NonCommercial purposes.
Section 3 – License Conditions.
Your exercise of the Licensed Rights is expressly made subject to the following conditions.
Attribution.
If You Share the Licensed Material (including in modified form), You must:
retain the following if it is supplied by the Licensor with the Licensed Material:
identification of the creator(s) of the Licensed Material and any others designated to receive attribution, in any reasonable manner requested by the Licensor (including by pseudonym if designated);
a copyright notice;
a notice that refers to this Public License;
a notice that refers to the disclaimer of warranties;
a URI or hyperlink to the Licensed Material to the extent reasonably practicable;
indicate if You modified the Licensed Material and retain an indication of any previous modifications; and
indicate the Licensed Material is licensed under this Public License, and include the text of, or the URI or hyperlink to, this Public License.
You may satisfy the conditions in Section 3(a)(1) in any reasonable manner based on the medium, means, and context in which You Share the Licensed Material. For example, it may be reasonable to satisfy the conditions by providing a URI or hyperlink to a resource that includes the required information.
If requested by the Licensor, You must remove any of the information required by Section 3(a)(1)(A) to the extent reasonably practicable.
ShareAlike.
In addition to the conditions in Section 3(a), if You Share Adapted Material You produce, the following conditions also apply.
The Adapter's License You apply must be a Creative Commons license with the same License Elements, this version or later, or a BY-NC-SA Compatible License.
You must include the text of, or the URI or hyperlink to, the Adapter's License You apply. You may satisfy this condition in any reasonable manner based on the medium, means, and context in which You Share Adapted Material.
You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, Adapted Material that restrict exercise of the rights granted under the Adapter's License You apply.
Section 4 – Sui Generis Database Rights.
Where the Licensed Rights include Sui Generis Database Rights that apply to Your use of the Licensed Material:
for the avoidance of doubt, Section 2(a)(1) grants You the right to extract, reuse, reproduce, and Share all or a substantial portion of the contents of the database for NonCommercial purposes only;
if You include all or a substantial portion of the database contents in a database in which You have Sui Generis Database Rights, then the database in which You have Sui Generis Database Rights (but not its individual contents) is Adapted Material, including for purposes of Section 3(b); and
You must comply with the conditions in Section 3(a) if You Share all or a substantial portion of the contents of the database.
For the avoidance of doubt, this Section 4 supplements and does not replace Your obligations under this Public License where the Licensed Rights include other Copyright and Similar Rights.
Section 5 – Disclaimer of Warranties and Limitation of Liability.
Unless otherwise separately undertaken by the Licensor, to the extent possible, the Licensor offers the Licensed Material as-is and as-available, and makes no representations or warranties of any kind concerning the Licensed Material, whether express, implied, statutory, or other. This includes, without limitation, warranties of title, merchantability, fitness for a particular purpose, non-infringement, absence of latent or other defects, accuracy, or the presence or absence of errors, whether or not known or discoverable. Where disclaimers of warranties are not allowed in full or in part, this disclaimer may not apply to You.
To the extent possible, in no event will the Licensor be liable to You on any legal theory (including, without limitation, negligence) or otherwise for any direct, special, indirect, incidental, consequential, punitive, exemplary, or other losses, costs, expenses, or damages arising out of this Public License or use of the Licensed Material, even if the Licensor has been advised of the possibility of such losses, costs, expenses, or damages. Where a limitation of liability is not allowed in full or in part, this limitation may not apply to You.
The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability.
Section 6 – Term and Termination.
This Public License applies for the term of the Copyright and Similar Rights licensed here. However, if You fail to comply with this Public License, then Your rights under this Public License terminate automatically.
Where Your right to use the Licensed Material has terminated under Section 6(a), it reinstates:
automatically as of the date the violation is cured, provided it is cured within 30 days of Your discovery of the violation; or
upon express reinstatement by the Licensor.
For the avoidance of doubt, this Section 6(b) does not affect any right the Licensor may have to seek remedies for Your violations of this Public License.
For the avoidance of doubt, the Licensor may also offer the Licensed Material under separate terms or conditions or stop distributing the Licensed Material at any time; however, doing so will not terminate this Public License.
Sections 1, 5, 6, 7, and 8 survive termination of this Public License.
Section 7 – Other Terms and Conditions.
The Licensor shall not be bound by any additional or different terms or conditions communicated by You unless expressly agreed.
Any arrangements, understandings, or agreements regarding the Licensed Material not stated herein are separate from and independent of the terms and conditions of this Public License.
Section 8 – Interpretation.
For the avoidance of doubt, this Public License does not, and shall not be interpreted to, reduce, limit, restrict, or impose conditions on any use of the Licensed Material that could lawfully be made without permission under this Public License.
To the extent possible, if any provision of this Public License is deemed unenforceable, it shall be automatically reformed to the minimum extent necessary to make it enforceable. If the provision cannot be reformed, it shall be severed from this Public License without affecting the enforceability of the remaining terms and conditions.
No term or condition of this Public License will be waived and no failure to comply consented to unless expressly agreed to by the Licensor.
Nothing in this Public License constitutes or may be interpreted as a limitation upon, or waiver of, any privileges and immunities that apply to the Licensor or You, including from the legal processes of any jurisdiction or authority.

6
archetypes/default.md Normal file

@@ -0,0 +1,6 @@
---
title: "{{ replace .Name "-" " " | title }}"
date: {{ .Date }}
draft: true
---
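For illustration (not part of the commit): with this archetype, `hugo new posts/hello-world.md` fills the template in, turning the file name into a title. The generated front matter looks roughly like this (date illustrative):

```yaml
---
title: "Hello World"
date: 2022-06-25T08:47:06+01:00
draft: true
---
```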

61
config.toml Normal file

@@ -0,0 +1,61 @@
baseurl = "https://blog.cyplo.dev/"
languageCode = "en-gb"
theme = "beautifulhugo"
paginate = 7
pygmentsUseClasses=true
pygmentsCodeFences=true
title = "Adventurous Computing"
[taxonomies]
series = "series"
tags = "tags"
[[params.bigimg]]
src = "images/header.jpg"
position = "center top"
[params]
homeTitle = "Adventurous Computing by Cyryl Płotnicki"
keywords = "programming, testing, tdd, rust, golang"
logo = "images/avatar.jpg"
showRelatedPosts = true
rss = true
[[menu.main]]
identifier = "donate"
name = " donate"
url = "https://liberapay.com/cyplo/"
[[menu.main]]
identifier = "blog"
name = "blog"
url = "/posts"
[[menu.main]]
identifier = "about"
name = "about"
url = "/pages/about"
[[menu.main]]
identifier = "projects"
name = "projects"
url = "/pages/projects"
[[menu.main]]
identifier = "talks"
name = "talks"
url = "https://slides.cyplo.dev/"
[Author]
name = "Cyryl Płotnicki"
website = "https://blog.cyplo.dev/"
email = "blog@cyplo.dev"
github = "cyplo"
gitlab = "cyplo"
twitter = "cyplo"
linkedin = "cyplo"
mastodon = "todon.nl/@cyplo"
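The `[taxonomies]` table above enables `series` and `tags` as front-matter keys; a post then declares them like so (title and values illustrative, borrowing the series name mentioned elsewhere on the site):

```yaml
---
title: "Testing tricks in Rust"
series: ["rust-testing-tricks"]
tags: ["rust", "testing"]
---
```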

111
content/pages/about.md Normal file

@@ -0,0 +1,111 @@
---
title: "Heya !"
date: 2019-04-08T12:51:40+01:00
draft: false
---
My name is Cyryl and I hope you're having a great day !
## Interesting random tidbits about me
I am [running](https://blog.cyplo.net/posts/2018/05/pair-with-me-on-rust.html) remote pairing sessions, helping people with Rust.
My recent focus is [adding fuzzer](https://twitter.com/cyplo/status/1094606119421521922) infrastructure to their projects.
I started a [blog post series](https://blog.cyplo.net/series/rust-testing-tricks/) on testing in [Rust](https://blog.cyplo.net/categories/rust.html) where I share the tricks I've learned.
I have been in both leadership and individual contributor roles, so I have respect for all sides involved in the process.
I like testing; I taught various aspects of TDD in an apprenticeship course.
## Day jobs
**Head of Core Engineering** at Form3, _May 2019 - now_
I am responsible for 3 engineering teams, each having its own technical leadership structure. I work closely with product leadership on the vision for the shared services at the organisation and with other team leads and section heads on execution planning. I help with establishing standards across multiple teams while providing architectural and technical support. I review the more sensitive code & documentation contributions. I guide people through their growth as team leaders and individual developers alike.
**Software Engineer** at balena, _October 2018 - May 2019_
I worked remotely on **Rust** crates compiled into NPM packages through the power of **WASM**.
As all of the above was on the bleeding edge - I found, reported and fixed bugs throughout the ecosystem.
**Software Craftsman** (Consultant) at Codurance, _January 2017 - October 2018_
Helping client teams grow - working within software teams and teaching technical concepts as well as improving the process.
Running workshops (e.g. Legacy Code Retreat, advanced git, **TDD**).
Running **Rust** pairing sessions and intro to Rust workshops.
Facilitating open-space conferences.
Teaching at the apprenticeship programme.
Projects:
- Transportation route planning software for a big retailer, using a graph database (**Gremlin/Apache Tinkerpop**) in Java. Managed deployment to **AWS** via **Terraform**.
- Transaction consolidation software for an investment bank.
- Electronic circuit simulation software for a startup.
**Consultant** at Infusion, _June 2012 - November 2016_
Writing code is still where I spend most of my time, and happily so.
I do that in **Java** and **.Net**, sprinkled with some **C**, **C++**, Python, bash and powershell. I gather requirements, talk to the clients and other engineers, and figure out solutions together. My particular specialty seems to be taking over legacy code bases and making them shine. I am responsible for a small team of developers.
Projects:
- Pulse - a product for mass remote software installation and monitoring. Used, among others, in demo kiosk installations, phone retail shops and in-flight entertainment systems. Writing server and client code. Largish system working mostly on the backend in .Net, standing on Azure. I'm helping here and there, with the architecture and sometimes doing changes to our **Android** client code as well.
- I've helped build a **distributed** solution for insurance cost calculation for Metlife Japan. Mostly Java, some IBM mainframes and occasional C++.
- I've worked for other financial institutions on distributed software for risk exposure analysis. Mostly .Net, some C++, large in-house computation farms involved.
**Senior Software Engineer**, Power Media S.A., _February 2011 - June 2012_
I was asked to return to Power Media by the head of the whole department, which was quite cool ! I've helped recruit more people and was one of the developers working on the system for TV materials storage. The system was written in **C#**, but I've also taken a look inside an **Erlang** one and one in **Obj-C** from time to time. I've convinced the company to [open source](https://github.com/powermedia/PowerMedia.Common) the core libraries used.
**Mobile Games Developer**, iNTRES/Scoreloop, _December 2009 - February 2011_
I was a part of a team that was tasked with developing a cross-platform, pure C library that provided matchmaking, ladders and other networking capabilities to mobile games, **Pro Evolution Soccer 2011** among others. We've used C89 as the base language.
**Lead Developer**, Power Media S.A., _June 2007 - December 2009_
This was my first experience as a leader. Despite this we've managed to build the new [travelplanet.pl](http://www.travelplanet.pl/) - a website for the biggest trip broker in Poland. This was a .Net job and involved some quite heavy server machinery. Around **60 different data sources** were involved. In the meantime I've also helped with growing the **Erlang** team and got my hands dirty with proof-of-concept **low-level** Mac **PowerPC** programming.
**Software Developer**, Power Media S.A., _June 2006 - June 2007_
My first "real" job ! Maintenance on the in-house systems written in Java. Slowly I got into client-facing projects as well. Java, Webobjects.
## Education
**Wrocław University of Technology** 2005-2007
## Books
I am one of the technical reviewers for the "500 lines or less" book, a new entry in the [Architecture of Open Source Applications](http://aosabook.org/en/index.html) books series.
I helped to shape the chapter on Paxos there.
## Talks/workshops - ask me to give some !
The links provided are for the slide decks.
- Working with legacy code utilizing golden master tests
- Fuzzing in Rust
- Mastering Git
- Legacy Code Retreat - running a full-day workshop on test-driving changes in legacy code.
- A secret to secure software (ssshhh it's TDD)
- ["Conferences you did not know existed and now you know you just need to go to"](https://blog.cyplo.net/posts/2016/03/13/conferences.html)
- "Languages are communities" - impromptu talk about how communities are the most important thing when choosing your programming language.
- ["Tor and what it's good for"](https://github.com/cyplo/talks/tree/master/tor_for_beginners)
- ["Erlang for the .Net people"](https://github.com/erlang-wroclaw/erlang_tech_night) - introduction to Erlang with some live coding by me and my friend Fabian
- ["Anatomy of a debugger"](https://prezi.com/hzvnsznfkgwj/anatomy-of-the-debugger/) - very high-level overview of how debugging works
- SMD soldering
- Start your journey with 3D printing
## More to read
I like building tools. Be they physical or software. Something that lasts.
I like teaching - allows me to understand topics in more depth.
I'm good at understanding whole worlds of interactions within software projects.
The people and their needs and then the system architecture, code itself, its execution. Because of that I'm sometimes slow to comprehend new things. I really like trying new things. I like **Rust** quite a bunch and am trying to spread that love - I am a contributor to the core Rust projects. I run my own infrastructure - email, blog, backups.
I ask a lot of questions. I like to be in the understanding of the value of my work. I think there's a big value in sharing. I am really into the open source movement. I like **helping people**. I like electronics. I've assembled a RepRap. I've built a sun-powered backup server. I'm learning PCB design in **KiCad** and CAD via Fusion360. I like debugging. Electronics and software. I have a **CNC** router. I like board and RPG games. Rock climbing. Cycling. Other life activities.
The curious reader might want to follow my blog at [blog.cyplo.net](https://blog.cyplo.net/)
[Email me](mailto:hello@cyplo.net) at hello@cyplo.net - I'm happy to have a chat with you !


@@ -0,0 +1,9 @@
---
title: "Projects"
date: 2019-04-08T12:51:40+01:00
---
- [genpass](https://git.sr.ht/~cyplo/genpass) - A simple yet robust commandline random password generator written in Rust. Has an extensive generative test suite, including tests against a Rust port of Dropbox's password strength tester zxcvbn.
- [netlify deployer](https://pypi.org/project/netlify-deployer/) - A Python CLI tool to deploy a website to Netlify using their incremental API client. Not the best code but works like a charm. This blog [uses](https://git.sr.ht/~cyplo/blog/tree/master/item/.build.yml#L21) it.
- [my dotfiles](https://sr.ht/~cyplo/dotfiles/) - all of the settings for all of my machines, automated using Nix
- [bakare](https://sr.ht/~cyplo/bakare/) - work in progress on writing a modern backup solution in Rust


@@ -0,0 +1,23 @@
---
title: Google AI Challenge
date: 2010-10-10 20:37:09
tags: [dotnet, ai, google, mono]
category: artificial intelligence
---
It seems that being so much into other programming things I missed the
[Google AI Contest](http://ai-contest.com/) announcement. It's
supercool that they sponsor such challenges but also it's really uncool
[so like warm ?] that apparently I won't have time to participate.
About the challenge itself - it involves working with
[GalconFusion](http://www.galcon.com/fusion/)-like game. Easy to
learn, difficult to master type. As they provide starter packs with a sample bot, it's easy to start. However, getting a decent place would involve a really long time of good old hard work.
As a side note on .NET being spread around the unix world, there's a C# starter pack. Good. After a few secs I realised that the code will be run
under antique [Mono 1.2.6](http://www.go-mono.com/archive/1.2.6/),
therefore none of my young C# friends would be there. No type inference,
no linq, no other fancy stuff.  Not good.


@@ -0,0 +1,29 @@
---
title: It's raining bullets
date: 2010-10-15 13:12:13
category: games
---
I have always been fascinated by works of Adrian Chmielarz, from the
times of Metropolis Software on. Naturally then, I am following really
closely on the development of
[Bulletstorm](http://www.bulletstorm.com/). Damn, I even hoped to
talk to The Guy or Mieszko Zielinski at the
[gameaiconf](http://gameaiconf.com/) in Paris, but
[Alex](http://aigamedev.com/) told me that "whoa,
[PCF](http://www.peoplecanfly.com/) are so busy with the new IP that
they didn't have time to come this year, next year they'll be here,
hopefully".
Having something so deeply embedded in your mind that you're always
thinking of it. While you eat, while you talk, in your sleep. That's the
way I do like to work. Reading about making of Bulletstorm gives me
creeps as I do feel their hyperenthusiasm about the thing, I do feel
that they take this game everywhere with them. To dinner, breakfast,
party. That's the way I do like to work.
Now back on track, an interview which triggered this urge to flush my
thoughts here. [Interview with Tanya Jessen](http://techland.com/2010/10/14/origins-tanya-jessen-lead-producer-on-bulletstorm/),
producer at Epic.
People make games so other people can have fun. How selfless.


@@ -0,0 +1,47 @@
---
title: VPS adventures part one
date: 2010-10-24 23:40:32
tags: [dmehosting, server, ubuntu, virtualization, vps]
category: server
---
Like every person in the software industry, I need ways to promote my humble self. A decent web page and an online resume, plus maybe a blog, are a must these days. The question remains where to put them.
Previously I had my site published with one of the Polish hosting companies for free, as my friend was involved in its operations. Thanks [Kajetan](http://www.kajetanwandowicz.com/) for 5 years of support !
And maybe it'd stay that way if not for that desire of mine to tinker and have control over every aspect of the technology. Some platform with the ability to boot the system I want would be appreciated.
Real servers are good when you have a place to put them. And want to pay
electricity bills, provide UPS, KVM, BGP and other three-letter
abbreviations. And oh, I simply don't like the fan noise
anymore. [VPS](http://en.wikipedia.org/wiki/Virtual_private_server)
then it is.
Since I recently made contact with Ubuntu Server edition and liked it
for its simplicity, I started searching for a cheap VPS which supports
the newest Ubuntu. Two googles later I found
[dmehosting.com](http://www.dmehosting.com/). 6$ for 25Gigs of space
and 256MB RAM seemed ok, so I bought the VPS1 plan. Payment went without
problems, they support PayPal. With 6$ less on my account I was waiting
for them to give me the IP + login & password. I didn't expect that it would take the whole day.
My first contact with the machine was that of apt-get update, which
failed because of a lack of network connectivity. I was logged in via ssh to that machine, so definitely some sort of connectivity had to be in place. I dug in and found non-working DNS servers, so I made the VPS connect to other ones and everything started working. I jumped into
their 'live' tech support line just to hear that it was really bad of me
to change the resolv.conf and I just shouldn't do that. In the meanwhile their DNSes came back, so I did in fact revert the resolv.conf after all.
It's [OpenVZ](http://wiki.openvz.org/Main_Page)-based hosting, so
policy of "no-no's" is pretty much embedded in the very system. No
kernel reinstallation. No swap space. No system clock write access. No
clicking too fast in the administration panel. Back to google then my search for VPS
is. Stay tuned for the next part.
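For reference, the resolv.conf change described above boils down to swapping in resolvers that actually answer; an illustrative edit (the nameserver addresses are examples, not the provider's):

```
# /etc/resolv.conf - point the box at resolvers that respond
nameserver 8.8.8.8
nameserver 8.8.4.4
```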


@@ -0,0 +1,39 @@
---
title: VPS Adventures part two
date: 2010-11-01 14:23:18
tags: [server, tilaa, ubuntu, virtualization, vps]
category: server
---
After some not so pleasurable experiences with the previous VPS hosting
provider I started my search for some other one. This time not only
google was involved but also [aardvark](http://vark.com/) and some of
the "top XX best hosting providers" sites. Two results kept showing up.
[server4you](http://www.server4you.com/) and
[tilaa](https://www.tilaa.nl/). Which one to choose then ? I wanted
to give both of them a chance.
[server4you](http://www.server4you.com/). I really tried to buy a VPS
there and I failed. Lots of forms to fill and no instant money
transfer available. I entered my credit card details only to receive the
email after a day or so stating that they were unable to process my
payment.
[tilaa](https://www.tilaa.nl/). What caught my attention there was
their clean website. I like such a design style, as you can also
see here. Not so cheap, I must admit, but definitely easy to buy.
Payment transferred and after about a minute I had full access to the
VPS. That's how I like things to work. I have been using tilaa's
simplest VPS plan since then and I am pretty happy with it. Here are
some things I like or don't about their services.
- friendly, responsive staff
- 2x RAM amount of swap space - seems great compared to no swap on other VPSes
- [KVM](http://www.linux-kvm.org/page/Main_Page)- and [qemu](http://en.wikipedia.org/wiki/QEMU)-based, you get full control of your system
- good overall system responsiveness - I experienced occasional lags when using [OpenVZ](http://wiki.openvz.org/Main_Page) virtualization before, there's no such thing here
- they work fast - the day the new Ubuntu 10.10 was released, they had the VPS image ready !
- their motto "We try to be good open source citizens" just convinces me
- the only disadvantage is the price, it's 8.95 euro net monthly, it's also easy to miss that info on the site
As I already have a safe place to tinker with my system the real work on
the server configuration begins. Stay tuned for the next episode.


@ -0,0 +1,16 @@
---
title: Hello Sharpdevelop
date: 2010-11-14 13:38:02
category: open source
---
For years I've been involved in the open source community. I encouraged
company managers to use open source solutions, I reported bugs, I did
tests. Since this Monday I am officially a part of
[Sharpdevelop](http://www.icsharpcode.net/opensource/sd/) team,
pursuing the goal of better
[F#](http://en.wikipedia.org/wiki/F_Sharp_(programming_language))
support within the IDE. The project itself has made impressive
progress over the last few years, had [MonoDevelop](http://monodevelop.com/)
forked from it and has some pretty good people working on it right now.
What more can I say.


@ -0,0 +1,35 @@
---
title: Netbeans 6.9 for Rails 3 on Ubuntu 10.10
date: 2011-01-29 14:51:07
tags: [linux, maverick, netbeans, ruby, ubuntu]
category: ruby
---
I am a lazy person. I do like things to just work, run out of the box,
you name it. I like Ubuntu for its packaging system and ease of
installation of various software. However, using Netbeans from the
default repo could cause you a headache when developing Rails 3 apps.
These just won't run. Let us start with installing the newest version of
the IDE
```bash
sudo apt-get install netbeans
#then go to the Tools->Plugins->Available and install all regarding Ruby
```
Now go ahead and try running some Rails app.
```bash
/var/lib/gems/1.9.1/gems/activesupport-3.0.3/lib/active_support/dependencies.rb:239:in `require': /var/lib/gems/1.9.1/gems/activesupport-3.0.3/lib/active_support/cache/mem_cache_store.rb:32: invalid multibyte escape: /[x00-x20%x7F-xFF]/ (SyntaxError)
```
you say ? Here's a quick fix
```
#edit /usr/share/netbeans/6.9/etc/netbeans.conf
#=> append -J-Druby.no.kcode=true to the netbeans_default_options and voilà
# the whole line in my case goes like this:
netbeans_default_options="-J-client -J-Xss2m -J-Xms32m -J-XX:PermSize=32m -J-XX:MaxPermSize=200m -J-Dapple.laf.useScreenMenuBar=true -J-Dsun.java2d.noddraw=true -J-Dsun.java2d.pmoffscreen=false -J-Druby.no.kcode=true"
```
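The same edit can also be scripted. Here's a sketch with GNU sed, run against a local stand-in file so you can preview the result first; point `CONF` at the real `/usr/share/netbeans/6.9/etc/netbeans.conf` (and back it up) when doing it for real:

```bash
# work on a local stand-in copy first; adjust CONF for the real file
CONF="netbeans.conf"
printf 'netbeans_default_options="-J-client -J-Xss2m"\n' > "$CONF"  # shortened stand-in contents
# append the workaround flag inside the quoted option list
sed -i 's/^netbeans_default_options="\(.*\)"$/netbeans_default_options="\1 -J-Druby.no.kcode=true"/' "$CONF"
cat "$CONF"  # -> netbeans_default_options="-J-client -J-Xss2m -J-Druby.no.kcode=true"
```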
Long term solution ? Wait for Netbeans 7.0, as the devs promised it to be fixed there.


@ -0,0 +1,28 @@
---
title: How not to blog
date: 2011-03-27 22:25:23
category: blog
---
Today's post is definitely not a tech one. Or maybe just a bit. It's
about blogging, from the other side of the keyboard. PEBKAC-style.
Fun-fact: Did you know that 'kac' in Polish means 'hangover' ? To the
point. I realised that I have dozens of posts in the state of started
but definitely not finished. Why is that so ? Is it that I am superbusy
with my new work ? Kinda. Is it that I am the guy responsible for the
installation of the sound system in one of the new Wrocław's clubs ?
Sounds pretty serious but still not a perfect excuse. The thing is that
there is no right excuse for not doing something you want to do. Do it
or don't. Don't pretend to. The funny thing is that the smallest
possible posts, like
[this one](/posts/2011/01/29/netbeans-6-9-rails-3-ubuntu-10-10/) brought me
the most views. I don't wanna lose you guys because I don't feel like
blogging recently. My unfinished tech posts probably won't make it,
because I just don't remember the small tech tricks I used
to solve those problems. I didn't even write them down, as I was so
enlightened by the superiority of my solutions. Don't do that. Take notes about
everything which you would possibly need and start working on the
sentences and wording. Right now. Go. Hit publish and let the world
know. Thanks go to
[TC](http://techcrunch.com/2011/03/27/why-startups-need-to-blog-and-what-to-talk-about/)
& [Mark Suster](http://techcrunch.com/author/marksuster/).


@ -0,0 +1,43 @@
---
title: Installing MacOSX Server 10.6 on VirtualBox
date: 2011-05-07 14:23:30
tags: [macosx, virtualbox]
categories: [freedom, mac, server]
---
It all started with my work assignment of installing MacOSX Server 10.6
on the XServe rack. Briefly speaking, after debugging some hardware problem
with the CPU temperature meter, I managed to install the base OSX server
system there. After that I needed to set up VMs with 10.6 64bit and 10.4 32bit
on top of that. Why so if already having 10.6 as the base system ?
Simply I wanted to avoid constant reinstallation of the base system, as
the machines are meant to be used for development and testing. Also the
XServe hardware was needed instead of classic blade system to meet Apple
license requirements. Having read the internet twice, I decided to give
VirtualBox a try. My way of thinking was that I am already familiar
with that piece of software and know it as an easy-to-use one. In
addition to being pretty fool-proof it also enables features like operating from the command line only. So what's the deal ?
Bring up the VirtualBox GUI, click the new virtual machine creation button,
choose MacOSXServer, insert the iso file made from the installation dvd
and voilà ? Not even remotely close. It appears that Apple has embedded
some kind of valid-processors db into the system and checks the
processors present against it on every boot. The only configuration I managed to install
and launch was single-core. Both 32 and 64bit guests seem to be working
fine. But what's the point of having many cores there if only one can be
used per VM ? Some time after that I found a brilliant pair of blogs by
[prasys](http://prasys.info/) and
[nawcom](http://blog.nawcom.com/). It seems that the latter
produces what's called [ModCD](http://blog.nawcom.com/?p=446) which
allows booting with the processor checks disabled. Recipe for amd64:
- create new MacOSX vm in VirtualBox
- change it to non-EFI type
- boot ModCD
- swap iso image for the one with the MacOS install
- press f5
- type -force64
- press enter
For me the installer crashed on the last step before reboot, but then it
rebooted successfully and continued on with the installation. Don't
forget to donate to nawcom if you can.


@ -0,0 +1,31 @@
---
title: Speeding up Eclipse/Aptana
date: 2011-05-23 12:29:49
tags: [aptana, eclipse, java]
category: open source
---
As you know, Aptana is Eclipse. And Eclipse is a Java-based IDE,
which means it's not particularly a speedy one. However, if you have a
decent amount of RAM, like 4GB, it's fairly easy to speed Eclipse
up. Find `eclipse.ini` or `aptana.ini` or another file whose
contents look similar. These settings are from my Linux box - I know that
MacOS can get scared by higher values there; if you encounter any problems, try lowering the `Xmx`
and/or the others.
```
--launcher.XXMaxPermSize
512m
--launcher.defaultAction
openFile
-vmargs
-Xms128m
-Xmx2048m
```
These settings are for a 4GB RAM box, try to find the ones which suit you best. `MaxPermSize`
stands for the maximum amount of the memory to be used by the Java
internals, `Xms` gives the amount of heap allocated on the VM start and
`Xmx` is the heap size limit. Start by upping `Xms` value as it's often
too small which causes the Java VM to make lots of heap resizes on the
app start.
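For comparison, on a box with less RAM one might scale these down. The values below are just an assumed starting point, not settings taken from any particular machine:

```
--launcher.XXMaxPermSize
256m
-vmargs
-Xms64m
-Xmx1024m
```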


@ -0,0 +1,48 @@
---
title: How to hire a programmer, a.k.a. tech recruitment explained
date: 2011-07-24 20:31:55
tags: [recruitment]
category: hopefully influential
---
Today's post is not a recipe at all. Here's what I think tech recruitment should look
like. It won't be cheap. Start by describing the position. Who are you looking for,
what would they do for you, how would they bring money to your company ?
Use words, not techie talk. I found that most of the job descriptions online
do not match the real expectations of the employer. You just want a good
programmer, a person who thinks about what they do. It really doesn't
matter if they know Haskell and you write in Erlang. It doesn't matter if
they think in asm when you're looking for Forth. It doesn't matter if
they are Linux nerds and you're Windows-only, as long as they know how
computers work. As long as they think about what they do and agree with your goals, that's perfectly fine. If you
translate your desire for a good programmer into a bulletpoint list of
tech words, you dehumanize it. Don't look for skillsets, look for people. How do you check whether someone is a good programmer then ? How do you
score people against such a vague job description ? You don't score
people, you just sort the set of people. Start with a rather loose
conversation. Tell me about the point in life you're at, tell me
something about what fascinates you in programming and, finally,
what's your motivation for changing jobs and why do you want to work
with us ? Somewhere in the middle I ask about the projects, what
technologies you like, what not, why not. Do you feel comfortable
teaching others ? If someone is fascinated by
The Machines, you'll have noticed it by now. Same with
boring people or annoying people. If you do not want to work with them
then do not hire them. Even if they're brilliant. How does the computer
work then ? How does it happen that I have this text file full of
strange writings and suddenly they make the computer do stuff ? Why is it
not a good idea to get a modern PC to fly a space shuttle ? What's a
stack and what's a heap ? Is the stack memory different from the heap
memory ? Why so. And the why. The most important question for me is the
mighty why. Not only should they understand how something works, but
more importantly why it works like that. I enjoy working with the
microwave-curious people more than the hex-feared ones. How does the microwave
oven heat things ? Does DEADBEEF scare you ? Talk with people, make notes. What annoys you, what
impresses you. Do the exercises with them. Ask them how they would solve
a simple but real-life problem. Design the solution with them. The more
lifelike the work scenario the better. Do the things you'd normally do
with a work colleague. And order the set of candidates. Work as the
simple max function. Decide
which one seems better suited for you. Then take the next one and
compare her to the current max. And please hire people only when you're
ready to hire them. UPDATE: after writing this post I've stumbled upon
this [37signals article](http://37signals.com/svn/posts/2628-how-to-hire-a-programmer-when-youre-not-a-programmer)


@ -0,0 +1,20 @@
---
title: Ubuntu 11.10 Beta1
date: 2011-09-22 09:40:22
tags: [ubuntu]
category: linux
---
What do I think ? Easy to install, hard to disable Unity, some programs crash sometimes and some things are weird. Despite that, I have faith that the Ubuntu folks will have the system ready when 11.10 comes.
As whining itself never got anybody anywhere, I've reported all of these here:
[#855901](https://bugs.launchpad.net/ubuntu/+source/linux/+bug/855901)
[#834425](https://bugs.launchpad.net/ubuntu/+source/software-center/+bug/834425)
[#855945](https://bugs.launchpad.net/ubuntu/+source/unity-2d/+bug/855945)
[#855917](https://bugs.launchpad.net/ubuntu/+source/empathy/+bug/855917)
[#855919](https://bugs.launchpad.net/ubuntu/+source/thunderbird/+bug/855919)
I even got one OOPS. However, since I'm writing this post under
11.10, it seems usable; even the OOPS was not much of a disturbance, as
it got a nice GUI window displayed and caused no system crash.


@ -0,0 +1,34 @@
---
title: Configuring Tor non-exit relay
date: 2011-10-06 15:33:49
tags: [tor]
---
I'm for privacy. I'm for cryptography. I do provide
[Tor](https://www.torproject.org/) relays for the good of all people.
Here's how to configure a classic non-exit Tor relay on your machine. In
my case I got it running on the VPS server, with 2 IP addresses and some
bandwidth quota applied. I wanted the Tor traffic to be easily
recognizable from outside as different from the 'normal' traffic coming
from my server. The second requirement was to make Tor not use up my whole
traffic quota. Here's my config, with some notes:
```
cyryl@serv:~$ cat /etc/tor/torrc
SocksPort 0 # what port to open for local application connections
SocksListenAddress 127.0.0.1 # accept connections only from localhost
RunAsDaemon 1
DataDirectory /var/lib/tor
ORPort 9001 #switches Tor to server mode
ORListenAddress tor.cyplo.net
OutboundBindAddress 91.213.195.28 # which IP address to use for outbound traffic
Nickname cyplonet
Address tor.cyplo.net
ExitPolicy reject *:* # no exits allowed
AccountingStart day 09:00
AccountingMax 2 GB
MyFamily cyplonethome, cyplonet
```


@ -0,0 +1,19 @@
---
title: Disassembling Pioneer SE-50 headphones
date: 2011-12-20 11:12:11
tags: [hardware, repair]
---
Today's story is a photostory ! Here's how to disassemble Pioneer SE-50
headphones. I found this task quite difficult, mainly because there is no
info available on the internet on how to open the can from the back. As you
can see Pioneer signs on the side of each can are removable. Take them
off to uncover the screws. Research made possible by
[kabanosy](http://en.wikipedia.org/wiki/Kabanos) - best multitool
ever. Some more info on the headphones: [scan of the user manual](/pdf/disassembling-pioneer-se-50/se-50-b-user-manual.pdf)
Produced between '68 and '72.
Two speakers per can !
Photos taken after replacing the cable and before pots rejuvenation.
Enjoy :)
{{< gallery dir="/galleries/disassembling-pioneer-se-50" />}}


@ -0,0 +1,61 @@
---
title: Bitcoin wallet recovery using Photorec
date: 2012-04-01 21:06:01
tags: [bitcoin, recovery]
category: linux
aliases:
- /2012/04/01/bitcoin-wallet-recovery-photorec/
- /posts/2012/04/01/bitcoin-wallet-recovery-photorec/
---
Ever cried over a damaged disk holding your bitcoin wallet ? Ever
formatted storage holding the perfectly valid and only copy of your
wallet.dat ? Yes, me too. Well, there's
[Photorec](http://www.cgsecurity.org/wiki/PhotoRec), an open-source file
recovery tool. It just skips the partition and filesystem info and
scans for plain file signatures on the disk. Unfortunately it does not
recognize Bitcoin's Wallet as a file format. Bitcoin wiki
[states](https://en.bitcoin.it/wiki/Wallet) that wallet.dat is just a
good old BerkeleyDB. Just let me check that with the famous `file`
command:
```
[cyryl@uglybook ~]$ file .bitcoin/wallet.dat
.bitcoin/wallet.dat: Berkeley DB (Btree, version 9, native byte-order)
```
That is right. Photorec, however, has its own file-type
detection magic. Let's see whether it works on the wallet.
```
[cyryl@uglybook ~]$ fidentify .bitcoin/wallet.dat
.bitcoin/wallet.dat: unknown
```
Uhoh. Do not abandon hope for not all is lost ! Photorec [provides a
way](http://www.cgsecurity.org/wiki/Add_your_own_extension_to_PhotoRec)
for you to add custom signatures. The only missing ingredient appears to
be the knowledge possessed by the `file` command. Just check your Linux installation for
`file`'s detection signatures. On my system these are to be found
under `/usr/share/misc/magic`. We need a file extension, an offset and a
magic number for Photorec to work. Well, the extension is `.dat`; the magics
and offsets are stored in the file mentioned above. Translate the values
from decimal to hex, take care of endianness, and done :)
```
[cyryl@uglybook ~]# cat .photorec.sig
dat 0x0 0x00061561
dat 0x0 0x61150600
dat 0x0 0x00053162
dat 0x0 0x62310500
dat 0xc 0x00061561
dat 0xc 0x61150600
dat 0xc 0x00053162
dat 0xc 0x62310500
dat 0xc 0x00042253
dat 0xc 0x53220400
dat 0xc 0x00040988
dat 0xc 0x88090400
```
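The decimal-to-hex translation and the byte swap can be done right in the shell. A sketch, using 398689 - one of the Berkeley DB magic values listed in the magic file - as the example:

```bash
MAGIC=398689  # a Berkeley DB magic number, as listed in the magic file
# zero-padded 32-bit hex, native byte order
printf '0x%08x\n' "$MAGIC"  # -> 0x00061561
# the same value with its bytes swapped, for the other endianness
SWAPPED=$(printf '%08x' "$MAGIC" | fold -w2 | tac | tr -d '\n')
printf '0x%s\n' "$SWAPPED"  # -> 0x61150600
```

Both outputs match the corresponding lines in the `.photorec.sig` file above.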
That is it. Happy wallet recovery.


@ -0,0 +1,31 @@
---
title: Enabling wireless on Dell Latitude 2110 under CrunchBang Linux
date: 2012-07-08 09:18:59
tags: [BCM43224, crunchbang, dell, linux]
category: linux
---
Update:
**Please note that I no longer host this kernel as it is very old and also Crunchbang is no more.**
Some time ago I got my hands on the [CrunchBang Linux](http://crunchbanglinux.org/) distro, which is great. The only thing there that could be better is that my Broadcom BCM43224 WiFi card is not working. But I'm used to it, as many
Linux distros do not support it. I happen to
know from my Fedora experiences that this is resolved in kernel version
3.4. Also, as that kernel tends to be generally faster on my machine, I
decided to compile just that one. Here you are: a [Vanilla 3.4.4](http://www.kernel.org/pub/linux/kernel/v3.0/linux-3.4.4.tar.bz2)
kernel, [compiled](http://crunchbanglinux.org/forums/topic/18060/how-to-compile-the-kernel-from-source/)
for Intel Atom and with BCM43224 support. This is a 64bit kernel.
Download [crunchbanglinux-kernel-3.4.4-intel\_atom.tar.bz2]
and:
```
dpkg --remove firmware-linux
dpkg --remove firmware-linux-nonfree
tar jxf crunchbanglinux-kernel-3.4.4-intel_atom.tar.bz2
cd linux-3.4.4-intel_atom/
dpkg -i *.deb
reboot
```
This should work for Debian Squeeze as well, but it hasn't been tested on that system. Email me if you'd
like it compiled any other way, e.g. for another processor.


@ -0,0 +1,14 @@
---
title: Boosting your WiFi signal with cooking utensils
date: 2012-08-01 18:13:31
tags: [hardware]
category: hardware
---
I love cooking and well-made utensils. That is a part of the
solution. And the problem ? Boosting WiFi signal. Get a WiFi USB dongle
and put it where the bowl's focal point is. Or near it, wherever you get
the strongest signal boost. I get up to 2x stronger reception with the
antenna presented.
{{< figure src="/images/boosting-your-wifi-signal/IMG_0230.jpg" position="center" >}}


@ -0,0 +1,79 @@
---
title: How to build minimalistic REST server in .Net
date: 2012-11-01 13:22:39
tags: [dotnet]
category: programming
---
Install "Microsoft ASP.NET Web API Self Host" and all of its
dependencies via NuGet.
```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Http;
using System.Web.Http.SelfHost;

namespace resttest
{
    public class ExampleData { public int Id { get; set; } }

    public class ExampleController : ApiController
    {
        // GET /api/Example
        public IEnumerable<ExampleData> Get()
        {
            return new List<ExampleData>()
            {
                new ExampleData(){ Id = 2 },
                new ExampleData(){ Id = 4 }
            };
        }

        // GET /api/Example/3
        public ExampleData Get(int id)
        {
            return new ExampleData() { Id = 3 };
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            var configuration =
                new HttpSelfHostConfiguration("http://localhost:1337");

            // Set up the routes; note the route default is the controller
            // name without the "Controller" suffix
            configuration.Routes.MapHttpRoute(
                name: "DefaultApiRoute",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { controller = "Example", id = RouteParameter.Optional }
            );

            var server = new HttpSelfHostServer(configuration);
            server.OpenAsync().Wait();

            Console.Out.WriteLine("Press ESC to quit");
            do
            {
                while (!Console.KeyAvailable)
                {
                    Thread.Sleep(256);
                }
            } while (Console.ReadKey(true).Key != ConsoleKey.Escape);
        }
    }
}
```
We can launch it and query it using any http-enabled client:
`$ curl http://localhost:1337/api/Example` returns `[{"Id":2},{"Id":4}]`
Happy hacking !


@ -0,0 +1,24 @@
---
title: I bought a book
date: 2012-12-22 15:20:47
tags: [books]
category: books
---
I bought a book I hadn't read before. On the Internet. Paying a person I
don't personally know, based on his reputation only. Believing he would
send me a paper copy of his words from a remote land I've never even
visited. On the other hand, he trusted me to write this review and gave
me a discount for that very fact. And yet everything worked out pretty
well. I transferred the money, got the book, read it and I'm writing the
promised review right now. How is that possible ? How do people trust
each other in this "trust no one" place of the 'nets ? [Liars and
Outliers](http://www.amazon.com/gp/product/1118143302/ref=as_li_tf_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=1118143302&linkCode=as2&tag=cyplonet-20)
tries to answer those questions. Is this a good read, a breathtaking one ?
No, I sometimes struggled to get through. Does it need to be that
intriguing though ? Also no. It is a solid collection of examples of how
trust works in our societies and where it all came from. It
tries to categorize the situations in which trusting is needed and
provides a look into how they can be exploited by some. If you are even
somewhat interested in the cogs moving the society, this book might turn
some of your implicit, intuitive knowledge into explicit knowledge.


@ -0,0 +1,8 @@
---
title: 29c3
date: 2012-12-28 14:21:10
tags: [CCC]
category: freedom
---
{{< gallery dir="galleries/29c3" />}}


@ -0,0 +1,22 @@
---
title: Laptop I'd love to buy
date: 2013-01-15 17:12:50
category: hardware
---
- support for 16GB of RAM
- contact smartcard reader
- ~2kg weight
- 11-14" matte, non-glossy screen
- good keyboard, with separate [home], [end], [insert], [delete]
buttons
- screen resolution of >=1440
- HDMI
- at least 2x USB2.0 and 1x USB3.0
- no optical drive
- no thunderbolt
- no firewire
- TPM
- HSDPA modem
Is there something similar that you recommend for me to buy/build ?


@ -0,0 +1,14 @@
---
title: Erlang Tech night
date: 2013-03-06 18:41:34
tags: [erlang, talk]
category: talks
---
I recently gave a two-hour presentation on Erlang 101 along with
[Fabian](http://fabiankrol.com/). Thanks to the generous support
from Infusion, I was able to speak at the Institute of Computer Science at
the University of Wroclaw. The event was a blast, and that is the opinion of
people other than me too ;) Thanks to everyone who was there, and
you can find the code and slides on
[github](https://github.com/erlang-wroclaw/erlang_tech_night).


@ -0,0 +1,23 @@
---
title: Assembling RepRapPro Mendel
date: 2013-03-23 11:36:49
tags: [3d printing, hardware]
category: hardware
---
It was December and the 3D printing fever struck me. Here is the first day of
me building the RepRapPro Mendel printer.
{{< gallery dir="galleries/assembling-reprap/1" />}}
Second day of building. Fun fact: you can see the daylight come and go ;)
{{< gallery dir="galleries/assembling-reprap/2" />}}
Next few days - Featuring hotbed and plastic extruder !
{{< gallery dir="galleries/assembling-reprap/3" />}}
Yet another few days - Hot header installation, lots of wiring, tests and... it's alive !
{{< gallery dir="galleries/assembling-reprap/4" />}}


@ -0,0 +1,54 @@
---
title: Building NAS - hardware
date: 2013-03-31 20:27:52
tags: [nas, hardware]
category: hardware
---
For a long time I've wandered the voids of the intertubes in search of
the parts for my NAS/backup/home server build.
Online backup solutions are fine to some extent; however, someone other
than you is then going through that cat picture collection of yours and
other private stuff. Also, the cloud tends to not be the cheapest option
possible.
## Case
{{< figure src="/images/building-nas-hardware/Node304_nocover-top-view_lores.jpg" position="center" >}}
The need for moar virtual disk space calls for moar physical disk space.
To build any reasonable RAID solution you need 4-5 disks at least, plus
maybe one for a system disk. That is a minimum of 6 disk slots in the
computer case. As I have quite a variety of disks lying around, I'd
like this case to have an option for using either 3.5" or 2.5" disks.
Ah, and the whole thing should not take up big amounts of space. Hence
["Node 304" by Fractal Design](http://www.fractal-design.com/?view=product&prod=94).
Although I intend my NAS to be fanless in the future, I couldn't resist
leaving the original fans connected. These are beautiful, super-silent,
hydraulic bearing fans, all included in the Node's package.
There are really only two things that you need to be wary of: the metal
where the screwholes are is very soft and easy to damage, and
if you have a PSU without detachable cables, you're going to have a
bad time fitting them inside.
By the way, another small but nice thing is the user manual for this case.
Nice paper, a nice clean print and a friendly attitude. Attention to
detail !
## Motherboard
{{< figure src="/images/building-nas-hardware/lkpq2whHjMbzUSAU_500.jpg" position="center" >}}
What board supports at least 6 SATA disks, has low power consumption and
fits into the miniITX format ? Is easily obtainable in central Europe ? And not
super expensive ?
It took almost 3 months for me to find it; I present to you the [Asus P8H77-I](http://www.asus.com/Motherboards/P8H77I/). All the usual stuff, most of which I frankly don't need, and not so low on power, but hey - 6 SATA ports in miniITX.
One caveat - it needs 2 power cables connected to function: the
usual 24-pin EATX and also a 4-pin ATX 12V, also called P4. What if your
PSU has only those fancy new EPS8 and EPS6 connectors, as mine does ? Fear not,
connect half of the EPS8 into the P4 port and everything shall be okay.


@ -0,0 +1,116 @@
---
title: Building NAS - software
date: 2013-05-26 21:32:07
tags: [nas, linux]
category: server
---
## Operating system
I think my NAS box build is no longer in much flux, so I thought it'd be
nice to describe it. I had some disks lying around, so I installed them
and started playing with the software setup.
```
Disk /dev/sda: 60.0 GB, 60022480896 bytes
Disk /dev/sdb: 320.1 GB, 320072933376 bytes
Disk /dev/sdc: 160.0 GB, 160041885696 bytes
Disk /dev/sdd: 250.1 GB, 250059350016 bytes
Disk /dev/sde: 500.1 GB, 500107862016 bytes
```
The first one is an SSD drive, which I used for the OS
installation. I went for [Crunchbang](http://crunchbang.org/) as I
was already familiar with it; however, now I'm thinking of just putting
the newest Debian there, as it's finally released. Nothing fancy about the
OS, a regular install really.
```
storage# df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 53G 2.4G 48G 5% /
/dev/sda1 461M 31M 407M 7% /boot
```
As you can see, the `/` filesystem takes up a small amount
of space, hence the next thing I plan on doing is actually moving `/` to a
USB3.0 pendrive and then freeing the SATA drive from its current duties.
I'm reluctant to do so right now, as moving swap to a pendrive might
result in significant wear. I'm thinking of getting more RAM and then
getting rid of the swap altogether. These stats were acquired after a reboot;
there are some loads under which I saw swapping occur.
```
storage# free -m
total used free shared buffers
Mem: 1636 282 1354 0 53
-/+ buffers/cache: 166 1470
Swap: 1903 0 1903
```
## Software + configuration
NAS means SAMBA, right ? That's what I thought. RAID5 + SAMBA for Win
clients and NFS for others. After a while I got accustomed to this setup
and started playing with my photo collection as it was sitting on the NAS.
The problem ? I deleted one photo and wanted it back. It was nowhere to
be found. RAID5, although internally redundant for resiliency, was
visible as one drive only and happily deleted the data when asked to.
What I really needed was a backup solution, not a NAS. My final answer
to that:
```
storage# df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 53G 2.4G 48G 5% /
/dev/md0 294G 36G 243G 13% /mnt/array_back
/dev/sde1 459G 35G 401G 8% /mnt/array_front
storage# cat /etc/fstab
#
/dev/mapper/vg_storage-root / ext4 errors=remount-ro 0 1
UUID=b9d32208-edc0-4981-ab74-5da1e7348a1a /boot ext4 defaults 0 2
/dev/mapper/vg_storage-swap none swap sw 0 0
/dev/md0 /mnt/array_back ext4 defaults 0 2
/dev/sde1 /mnt/array_front ext4 defaults 0 2
storage# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun Apr 21 22:47:38 2013
Raid Level : raid5
Array Size : 312318976 (297.85 GiB 319.81 GB)
Used Dev Size : 156159488 (148.93 GiB 159.91 GB)
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
3 8 49 2 active sync /dev/sdd1
```
One disk `[sde]` serves as a front for all user operations. After a while, all changes
except for deletions are flushed onto `[array_back]`, which is a
RAID5 array.
```
storage# cat /etc/cron.daily/90_sync_front_to_back
#!/bin/bash
rsync -avr /mnt/array_front/ /mnt/array_back/back
```
## Secret sauce
[ownCloud](http://owncloud.org/). `[array_front]` is not directly
exposed via SAMBA or NFS; it's governed by an ownCloud instance, and then
only the ownCloud sync client on the computer or phone gets to mess with the
data. By having such a setup I get 3 copies of each file: one on the device,
one on the front array and one on the back array. What is also cool
about ownCloud is that it also handles contacts and calendar storage for
me. One more step towards getting all my data off Google ! Points for
improvement:
- `[array_front]` is not an array now. It's just a disk. Make it a
proper disk array.
- encrypt the data from array_back and send it to S3 and then let it
graduate to Glacier


@ -0,0 +1,13 @@
---
title: How to remove multiarch in debian 7
date: 2013-06-13 09:04:37
tags: [debian, linux]
---
Just a quick one, for me to remember and for you to enjoy.
```
dpkg -l | grep :i386 | cut -s -d ' ' -f3 | xargs apt-get remove -y
dpkg --remove-architecture i386
apt-get update
```


@ -0,0 +1,97 @@
---
title: Automating running Debian in VirtualBox
date: 2013-07-16 14:22:47
tags: [linux, debian, virtualbox]
category: server
---
I'm experimenting with service separation by having each service run in
its own operating system, all of them sharing the hardware though. Why ?
Separation seems to be the only secure approach to running any software.
Check [Joanna's](http://theinvisiblethings.blogspot.com/) blog out. I
went with VirtualBox on a Debian 7 host, with, well, Debian 7 guests.
First I prepared a template VM by creating a new VM and just proceeding
with the install. Then I tried spawning some clones of that, but having it
done manually takes quite some time and is error prone. Did somebody
say automation ? Yes ! First, clone a VM, regenerating MAC addresses and
making sure the resource caps are good:
```
vmrunner@storage:~$ cat prepare_vm
#!/bin/bash
set -e
if [ $# -ne 2 ] ; then
echo "usage: $0 vm_name vm_number"
exit 0
fi
VM_NAME="$1"
RAM_AMOUNT="128"
DISK_SIZE="2000"
VM_NUMBER="$2"
RDP_PORT=$(($2+3389))
EXECUTION_CAP="50"
VBoxManage clonevm fresh.cyplo.net --name $VM_NAME --mode machine --register
VBoxManage modifyvm $VM_NAME --vrde on
echo "setting RDP listening port to $RDP_PORT"
VBoxManage modifyvm $VM_NAME --memory $RAM_AMOUNT
VBoxManage modifyvm $VM_NAME --vrdeport $RDP_PORT
VBoxManage modifyvm $VM_NAME --nic1 bridged --bridgeadapter1 eth0
VBoxManage modifyvm $VM_NAME --pae on
VBoxManage modifyvm $VM_NAME --cpuexecutioncap $EXECUTION_CAP
VBoxManage modifyvm $VM_NAME --hpet on
VBoxManage modifyvm $VM_NAME --hwvirtex on
VBoxManage modifyvm $VM_NAME --pagefusion on
VBoxManage modifyvm $VM_NAME --dvd none
VBoxManage modifyvm $VM_NAME --autostart-enabled on
VBoxManage modifyvm $VM_NAME --macaddress1 auto
VBoxManage modifyvm $VM_NAME --macaddress2 auto
echo "vm set up, listing all VMs:"
VBoxManage list vms
```
Then run the VM and turn it into a Debian
service host with a new name and some software:
```
vmrunner@storage:~$ cat kickstart_debian
#!/bin/bash
set -e
if [ $# -ne 2 ] ; then
echo "usage: $0 new_hostname new_domainname"
exit 0
fi
NEW_HOSTNAME="$1"
NEW_DOMAINNAME="$2"
NEW_FQDN="$NEW_HOSTNAME.$NEW_DOMAINNAME"
aptitude update
aptitude dist-upgrade -y
aptitude install vim atop sudo -y
hostname
ifconfig
set -v
echo "$NEW_FQDN" > /etc/mailname
echo "$NEW_HOSTNAME" > /etc/hostname
sed -i "s/dc_other_hostnames\='.*'/dc_other_hostnames='$NEW_FQDN'/g" /etc/exim4/update-exim4.conf.conf
sed -i "s/127\.0\.1\.1.*/127.0.1.1 $NEW_FQDN $NEW_HOSTNAME/g" /etc/hosts
rm /etc/ssh/ssh_host_*
dpkg-reconfigure openssh-server
reboot
```
The script above needs to be run on the guest; I'm using scp with known ssh keys to upload it and
then run it via ssh. This step is to be automated in the future. Points to
improve:
- updating all the guests at once [Chef?]
- monitoring all guests at once [Nagios?]
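The upload-and-run step could be automated along these lines - a minimal sketch, not the exact setup I run; the guest address and script paths are hypothetical:

```python
import subprocess

# hypothetical guest address - adjust to your setup
GUEST = "root@guest.cyplo.net"

def build_commands(script="kickstart_debian", hostname="mail", domain="cyplo.net"):
    """Build the scp upload command and the ssh run command for one guest."""
    upload = ["scp", script, f"{GUEST}:/root/{script}"]
    run = ["ssh", GUEST, f"bash /root/{script} {hostname} {domain}"]
    return upload, run

def kickstart(hostname, domain):
    """Upload the kickstart script to the guest and run it there."""
    upload, run = build_commands(hostname=hostname, domain=domain)
    subprocess.run(upload, check=True)
    subprocess.run(run, check=True)
```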
Summarizing, I'm now running a Tor node, file server, caldav, carddav
and some other services on my home server. All of them in separate VMs.
And it's running quite well with 2GB of RAM. For more info on my home
server build check its
[hardware](/posts/2013/03/31/building-nas-hardware/) and basic
[software](/posts/2013/05/26/building-nas-software/).
---
title: New laptop
date: 2013-08-10 10:45:20
tags: [linux, hardware]
category: hardware
---
Hey, remember my [search for the new laptop](/posts/2013/01/15/laptop-would-love-to-buy/) ?
It's finally over ! I found that Clevo, a Taiwanese custom laptop
manufacturer, has recently added a 13.3" FullHD laptop base to their
offerings - the W230ST. And to my surprise - there is a [Polish importer](http://www.bluemobility.pl/) which allows pretty neat spec
customisation for you. Game on you say ? Indeed.
## What I like about it:
- Haswell-based, so all the new tech is here, same as in the newest
MacBooks
- up to 16GB RAM, I have 8GB installed for now and it works pretty well
- 2x mPCIx mSATA capable slots
- nice copper cooling inside
- FullHD matte display
- 4x USB
- HDMI
- typing on its keyboard, just clicks with me
- survived [OHM2013](https://ohm2013.org/site/) camp - not scared of
humidity and hot air
## What I do not like so much:
- Haswell-based, so Linux support is not that great yet, everything
seems to be working okay, however power consumption is off the
limits. 60Wh battery lasts for ~2h tops.
- while the outer side of the case is of nice rubberized plastic, the
  inside is a cheap-looking grey one. There was no option to change it
  unfortunately, while I see that [other](http://www.xoticpc.com/)
  importers/assemblers around the world have such mods available.
- the looks of the keyboard, purely visual stuff like the font used to
print the characters, the layout is okay
## Running Linux on it:
Everything seems to be working by default on
most of the distros. I've tested Gentoo, Arch, Fedora and Crunchbang.
The only thing that needed some tweaking was that by default I was
unable to control backlight brightness at all. Adding `acpi_backlight=vendor` to the kernel boot parameters, as suggested
on [Arch wiki](https://wiki.archlinux.org/index.php/Intel_Graphics#Backlight_not_fully_adjusting.2C_or_adjusting_at_all_after_resume.),
solved the problem. I haven't delved deeply into Optimus yet, so I don't
know whether the graphics cards switching works correctly or not. The
one thing I've noticed is that after waking from deep sleep, so after
leaving the lid closed for a long time rather than just a few minutes, there are
some visual artifacts on screen. Also, as mentioned before, power
consumption worries me a bit. Will keep you posted !
---
title: Idea for a curated links directory
date: 2013-09-02 18:47:15
---
I read a lot. Of books and news alike. Oh, and whitepapers too. And some
recipes maybe. Some of them I find via my RSS reader. Some are given to
me by colleagues at work, some arrive by mail, sometimes even in
somewhat curated, described form. I read through most of them. And
resend. Often I want to share a particular piece with friends of mine
and possibly some other people too. Most often I do this by pasting the
link into the IM window or email it to a particular person I know might
be interested. Then I recall the other person who would also love to
hear about this. Then I need to search the IM logs or my Delicious. And
all of this only to hear that the people mentioned did not have the time
to read the article referred but would love to in the future. Just not
right now. I need a system for this. I read something, I archive it in
the system. People sign up or I do sign them up. People receive my
curated links list in a more or less regular email. But there is also a
public, searchable web directory for all the previous editions. Is there
anything like it around ? Open source please. Selfhosted please. If not - I'm just gonna write it.
---
title: Debuggers - how do they work ?
date: 2013-09-18 17:26:26
tags: [talks]
---
I gave a talk [yesterday](http://www.meetup.com/wrocnet/events/138787702/). It was
on
[debuggers](https://prezi.com/hzvnsznfkgwj/anatomy-of-the-debugger/).
From VM-level debuggers to JTAG, quite a packed, high level overview. In
fact I gave it
[before](http://www.meetup.com/KGD-NET/events/113593522/) on other
occasions. But this time was different, hence this post. The thing I am
particularly proud of is that I was able to entertain the crowd for the
first half of an hour despite not having any computer launched and
connected. You know, like, interacting with actual people ! We had some
difficulties connecting to the projector, so I thought it'd be good to have an
open discussion first. We started with "unit tests vs debuggers": is
having a rock-solid unit test suite enough for you to never touch a
debugger ? After some time I've transitioned to my presentation, talked
a bit about the history of debugging, then what are the features of
modern debuggers and finally how all of these work. And while we're at
it I would like to give a shout out to the organizers of these 2 events.
Thanks !
---
title: New dotfiles repository
date: 2013-10-26 10:35:52
tags: [cygwin, dotfiles, linux, mac, vim]
category: open source
---
Hi ! Just letting you know that I've been working for some time on
structuring my dotfiles and finally published them to
[github](https://github.com/cyplo/dotfiles). These contain e.g. my
[vimrc](https://github.com/cyplo/dotfiles/blob/master/.vimrc) and
[font](https://github.com/cyplo/dotfiles/tree/master/.local/share/fonts)
[configs](https://github.com/cyplo/dotfiles/tree/master/.config/fontconfig/conf.d).
I've [patched](https://github.com/cyplo/vim-colors-solarized)
the famous [solarized](http://ethanschoonover.com/solarized) theme
for vim to allow better
[gitgutter](https://github.com/airblade/vim-gitgutter) symbols
display. Enjoy !
---
title: Booting Gentoo with LUKS+LVM2+systemd
date: 2013-11-10 21:20:26
tags: [gentoo, linux, luks, systemd]
category: linux
---
I've spent quite some time recently trying to get a laptop running
Gentoo to boot from an encrypted partition with LVM. I thought this
might be useful for someone else, so here you are. First things first:
I'm assuming you've followed Gentoo handbook and are operating from
within livecd's shell. You've done the regular luksFormat + lvm stuff
and you've come up with a layout similar to this one:
```
dagrey ~ # lsblk
NAME SIZE TYPE MOUNTPOINT
sda 55.9G disk
└─sda1 55.9G part
└─crypthome (dm-3) 55.9G crypt /home
sdb 29.8G disk
├─sdb1 485.4M part /boot
└─sdb2 29.4G part
└─root_sdb2-vg-root (dm-0) 29.3G crypt
├─vg-swap (dm-1) 8G lvm [SWAP]
└─vg-root (dm-2) 21.3G lvm /
```
You need a kernel to boot this, a kernel that understands crypto stuff as well as
lvm.
`genkernel --symlink --save-config --no-mrproper --luks --lvm --udev --menuconfig all`
If you're using `gentoo-sources` you'll notice the fancy Gentoo-specific menu on top. Go there and check
systemd. Apart from the usual stuff, please make sure to enable the options on
[this list](https://wiki.gentoo.org/wiki/Systemd), and also these:
```
Device Drivers
Multi-device support (RAID and LVM)
[*] Multiple devices driver support (RAID and LVM)
<*> Device mapper support
<*> Crypt target support
Cryptographic API
<*> SHA256 digest algorithm
<*> AES cipher algorithms
```
Your setup is so new that you need grub2. Grub2 is very picky about its configuration. Take this one and
avoid hours of reading:
```
dagrey ~ # cat /etc/default/grub
GRUB_DISTRIBUTOR="Gentoo"
GRUB_DEFAULT=0
GRUB_HIDDEN_TIMEOUT=0
GRUB_HIDDEN_TIMEOUT_QUIET=true
GRUB_TIMEOUT=3
GRUB_PRELOAD_MODULES=lvm
GRUB_CRYPTODISK_ENABLE=y
GRUB_DEVICE=/dev/ram0
# Append parameters to the linux kernel command line
GRUB_CMDLINE_LINUX="real_init=/usr/bin/systemd quiet real_root=/dev/mapper/vg-root crypt_root=/dev/sdb2 dolvm acpi_backlight=vendor"
```
You're using an initrd to set everything up for the kernel, so you need `real_root` and `real_init` instead of the
regular ones. `cryptdevice` no longer works; use `crypt_root`. And
`dolvm` is essential - without it only the first part will work, leaving
you with an open crypt container and a kernel panic just afterwards. Also
notice `GRUB_DEVICE`, `GRUB_CRYPTODISK_ENABLE` and `GRUB_PRELOAD_MODULES`. Make sure the first partition on the disk you're
installing grub onto starts at sector 2048. If it starts any earlier, grub just
won't be able to fit its magic in there. Finally, install grub:
```
grub2-install --modules="configfile linux crypto search_fs_uuid luks lvm" --recheck /dev/sda
grub2-mkconfig -o /boot/grub/grub.cfg
```
That should be sufficient to boot the system and initialize root. What
about those other encrypted partitions like `/home` though ? Well, the init
subsystem needs to initialize them; OpenRC did so by reading
`/etc/fstab` and then `/etc/dmcrypt/dmcrypt.conf`. Systemd is a
bit different here. You still need your `/etc/fstab` entries for it to
know which partitions need to be initialized. The place where you say
how to map and decrypt crypto containers, however, is `/etc/crypttab`.
```
dagrey ~ # cat /etc/fstab
/dev/sdb1 /boot ext2 defaults 1 2
/dev/mapper/vg-root / ext4 defaults 0 1
/dev/mapper/vg-swap none swap sw 0 0
/dev/mapper/crypthome /home ext4 defaults 0 2
dagrey ~ # cat /etc/crypttab
#crypthome /dev/sda1
crypthome /dev/sda1 /etc/conf.d/dmcrypt.key
```
The keyfile is available from the already-decrypted root partition. You can also skip the key and then
you'll get a password prompt, sometimes hidden somewhere in the systemd messages. Hit enter to reveal it once more.
---
title: 30C3
date: 2013-12-28 12:08:16
tags: [CCC]
category: freedom
---
I'm writing this at the beginning of day 2 of 30C3, as day 1 was so
packed with action that I was not able to sit down and type, not even
for a little while. [Glenn Greenwald's keynote](https://www.youtube.com/watch?v=gyA6NZ9C9pM) was
moving the crowd, making the audience interrupt him with rounds of
applause every few minutes. Lots of mobile phone network exploitation
talks along with general anti-buffer-overflow techniques. [Tor folks talking about the interesting times](https://www.youtube.com/watch?v=CJNxbpbHA-I) we live in. Quite
a day. Here are some photos, with no Congress people, except for me, in
them, as the tradition goes.
{{< gallery dir="galleries/30c3" />}}
---
title: Wordpress and nonstandard ports and protocols
date: 2014-01-01 11:32:36
tags: [tls, wordpress]
category: software
---
I needed to set up a Wordpress installation where https is on
nonstandard port and the admin interface lives in that land, while the
site itself is using plain http.
In `wp-config.php`:
```php
if (!empty($_SERVER['HTTPS'])) {
define('WP_SITEURL', 'https://example.com:12345');
define('WP_HOME', 'https://example.com:12345');
}
else {
define('WP_SITEURL', 'http://example.com');
define('WP_HOME', 'http://example.com');
}
```
This allows the installation to properly see resources like
images and css if accessed via the nonstandard port. So if you don't see
image previews in the admin panel, your styles look weird, or you just
don't see new posts' previews - this might be it.
---
title: Meetings
date: 2014-01-21 20:13:05
tags: [social]
---
Meetings come in different shapes and forms. Let's chat about them.
# Decisions
Most meetings come from the desire to have a decision made. The
problem in most cases is that these are not decisions to be made now.
Software prototyping is cheap. We should just try to build a working
solution and iterate around it. Let's prototype. Get the person most annoyed
with the problem and leave them to build it. Of course, the clearer the
communication of what they are actually doing, the better. It should be
something like "hey, I'm gonna build this, okay ?" or even "hey, I've
built that, let's see how it behaves". It does not need to be "we should now spend
multiple meetings on discussing how this should be done".
# Sharing the knowledge
Another possible reason for having a meeting is that some
knowledge needs to be shared. And that's a noble cause. Just don't make
a meeting out of it. Make a lecture. A presentation. No audience members
interacting with each other. The speaker talking and maybe sometimes
allowing questions. The knowledge sharing sessions are oftentimes a
prelude to the decision-making meetings. See above.
# Confirming your ideas
Sometimes however somebody just wants some confirmation on their idea,
maybe before building a prototype. Then, there is a good chance that
they already know who they should ask. No meeting then. Just ask the
people you know you should ask. 1-on-1 interaction. Maybe somebody will
overhear and start listening. Notice that the social dynamic is then very
different from a meeting: two people having a conversation and
another one politely listening, maybe being invited into the conversation
after a while. Just look at how it works between talks at
conferences. Very different from "everybody says everything" meetings.
# The meetings that are left
Also, if for some reason you really need to have a meeting, make
it opt-in. Just the people who are interested coming. Set a timer.
There is one I particularly like - a clock showing the amount of money
wasted so far by this meeting.
## Post scriptum
[37 signals on meetings](https://m.signalvnoise.com/meetings-are-toxic/):
These folks have the idea that every communication should be async and read
when convenient, hence their emphasis on email. That gets you to really
think your proposal through and really describe it, which is good. To stop and think, RFC-style.
---
title: Compiling tarsnap on RaspberryPi
date: 2014-02-01 21:59:27
tags: [raspberry pi, tarsnap]
---
Just a quickie for `tarsnap 1.0.35`. Featuring my new favourite download software, aria2.
```
aptitude install aria2 libssl-dev zlib1g-dev e2fslibs-dev
aria2c https://www.tarsnap.com/download/tarsnap-autoconf-1.0.35.tgz
aria2c https://www.tarsnap.com/download/tarsnap-sigs-1.0.35.asc
gpg --recv-key 2F102ABB
gpg --decrypt tarsnap-sigs-1.0.35.asc
sha256sum tarsnap-autoconf-1.0.35.tgz # should get the value from sig file, 6c9f67....9a
tar xf tarsnap-autoconf-1.0.35.tgz
cd tarsnap-autoconf-1.0.35/
./configure
time nice ionice make -j2
```
How do I know that `-j2` really gives some advantage on raspi ? Well, here is the benchmark:
```
#fresh, j1
real 14m7.129s
user 6m30.790s
sys 0m21.640s
#-j2
real 11m33.868s
user 6m36.690s
sys 0m19.880s
#-j1 again, caches warmed up
real 12m38.598s
user 6m30.960s
sys 0m20.470s
#-j2 again
real 10m14.975s
user 6m34.980s
sys 0m20.710s
```
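To put numbers on it, the relative improvement from those wall-clock times works out to roughly 18-19%:

```python
def seconds(minutes, secs):
    """Convert a `time` output pair like 14m7.129s to plain seconds."""
    return minutes * 60 + secs

def saved(before, after):
    """Fraction of wall-clock time saved."""
    return (before - after) / before

# the timings from the benchmark above
cold = saved(seconds(14, 7.129), seconds(11, 33.868))
warm = saved(seconds(12, 38.598), seconds(10, 14.975))
print(f"-j2 saves {cold:.0%} on a cold cache and {warm:.0%} warm")
```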
---
title: Today we fight back
date: 2014-02-11 15:47:54
---
If you don't have JS disabled you might have noticed the huge black
banner at the bottom of this blog. Stop mass surveillance. This is the
day we fight back. https://thedaywefightback.org/international/
---
title: Sunpowered server setup
date: 2014-02-16 21:13:46
tags: [hardware, raspberry pi, sunpowered]
bigimg: [{ src: "/galleries/sunpowered/IMG_0942.jpg" }]
series: sunpowered
---
Remember my [NAS](/tags/nas) ? It
turned out great ! Very reliable storage, I had a disk failure once and
haven't noticed for some time because all the files were just there.
Hardware-assisted virtualization is another great thing. I ended up
migrating all of my infrastructure there, each service in a separate
virtual machine; email, calendar, contacts, tor node and such. The only
caveat ? Power consumption. This setup just eats watts. About 50W of
constant power usage is not something you want to have turned on
24h/day. One such day I had a realization that this giant ball of plasma
hanging out there might be of some use. One side of my balcony
faces south somewhat, and gets lots of sunshine no matter whether it's
morning or evening. Why not exploit that ? That's how my first
solarpowered server setup was born. Enjoy the photos ! Also please find
upgrade options and general notes after the break ! There's a [part 2](/posts/2014/04/21/adding-voltage-current-measurements-sunpowered-pi/) of this post you might be interested in as well.
Setup itself consists of:
- 144W solar panel
- 33Ah 12V battery
- trusty old WRT54GL
- Raspberry Pi model B
- charging controller
- 12V and 5V step up/step down converters. Don't use linear converters,
  especially for the 5V rail, as these will give you the 5V by
  dissipating the difference from 12V directly onto their heatsinks -
  huge power losses.
- around one hundred M3 hexhead screws with nuts and washers. yup.
The router acts as wireless bridge to my home WiFi network, there are no
cables running from inside the house to the balcony. Router and raspi
use about 8W total. It is winter here now and this seems to be
holding up nicely, the panel being able to charge the battery for the night
during the relatively short day, even if the weather is bad. However, I want
more computing power there and this setup does not seem to be very
scalable. Another raspi model B means another 4W of constant power usage. I
estimate the whole thing will start losing power during the night at
about 15W of constant consumption. Which is okay for stuff like an email
server, but not really for a blog or other sites. Hence my first idea for
improvement: discard the router and switch to separate raspis, model A,
each with its own wireless network card. Should be much better. Some general
notes:
- Use equipment specifically designed for DC. You want to disconnect
the solar panel or battery sometimes. To be able to to that without
that fancy sparks show you need proper DC switch able to handle high
currents. AC switches as any other equipment dragged from AC land are
not really a choice. If you use AC mains switch to switch high
current DC you might end up with nice weld in place of your switch.
Same for fuses.
- My ability to cut acrylic along a line is nonexistent. Maybe use
  better tools ? Or even, since I now know how the box should be cut -
  just order the pieces for box 2.0 pre-cut to size.
- Same for my ability to make stuff look nice and clean
- I like the look of bare PCBs inside of transparent box though
- The box itself seems to be holding up nicely against below zero
temperatures as well as rain.
- Air flow is nice, nothing is heating up. Air enters from the bottom,
heats up a bit and moves up. Goes through the holes on the left, into
  the funnel and exits on the right. Water does not enter as there is a
  pretty steep slope there.
- Watch for SD card corruption. Most often the cause is the 5V not
  really being 5V. Raspi does not really like lower voltages. One
  preventive measure is not to use cheap voltage converters.
Another is to mount SD card with very conservative options. I use `/dev/mmcblk0p2 / ext4 defaults,rw,data=journal,journal_checksum,discard 0 1`
- Make sure your electronic components are rated for -40C to +80C
- I'm a bit worried about the battery being in such proximity to the airco
  unit. We'll see in the summer whether it needs relocation; for now
  the unit is completely powered off.
- mountain climbing equipment comes in handy when hanging stuff from
your balcony
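The "will it last the night" arithmetic above can be sketched like this - nominal figures from the setup; the usable fraction is my guess, since lead-acid batteries should not be discharged deeply and cold hurts capacity too:

```python
# 33 Ah at 12 V nominal = 396 Wh of nameplate capacity
CAPACITY_WH = 33 * 12

def runtime_hours(load_watts, usable_fraction=0.5):
    """Hours of darkness the battery covers at a constant load.

    usable_fraction hedges for depth-of-discharge limits and cold.
    """
    return CAPACITY_WH * usable_fraction / load_watts

# ~24.8 h at the current 8 W load, ~13.2 h at the projected 15 W
print(f"{runtime_hours(8):.1f} h at 8 W, {runtime_hours(15):.1f} h at 15 W")
```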
Future improvements:
- most pressing: get the power usage down by changing to model A + wifi
card
- add monitoring, something like ADC connected to raspi's GPIO ports,
gathering voltages all across. I would like to get readings on: solar
panel voltage, battery voltage, 5V rail actual voltage and the whole
system power usage at least
- more safety fuses and bypass diodes
{{< gallery dir="galleries/sunpowered" />}}
---
title: WiFi on RaspberryPi
date: 2014-02-24 20:45:32
tags: [debian, raspberry pi]
---
The easiest way I've found to configure wifi on a RaspberryPi, not really
mentioned when you search for it: no separate wpa_supplicant
configuration at all.
```
root@frontend:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback
iface eth0 inet dhcp
allow-hotplug wlan0
iface wlan0 inet dhcp
wpa-ssid NETWORK_NAME
wpa-psk NETWORK_KEY
iface default inet dhcp
```
---
title: PoC or GTFO
date: 2014-03-17 10:08:21
tags: [pocorgtfo]
---
Hey to all you curious folks - the new issue of PoC or GTFO is, well,
out ! The best minds in infosec on exploits and more. Lots of hardcore
programming knowledge to obtain. For your convenience, as it's not that
easy to get this on the webs, I've taken the liberty of uploading them
[here](/pages/files).
---
title: Changing Terminal preferences in Gnome 3 from the commandline
date: 2014-03-29 22:34:21
tags: [gnome3]
---
It turns out it was not that obvious, at least for me, how to change
various profile preferences for Gnome Terminal under Gnome 3 from the commandline. You can
and fetch the list of profiles this way:
```
~# dconf list /org/gnome/terminal/legacy/profiles:/
:b1dcc9dd-5262-4d8d-a863-c897e6d979b9/
```
And then you can use the profile id to list and change various settings:
```
~# dconf list /org/gnome/terminal/legacy/profiles:/:b1dcc9dd-5262-4d8d-a863-c897e6d979b9/
foreground-color
login-shell
palette
use-system-font
use-theme-colors
font
bold-color-same-as-fg
bold-color
background-color
audible-bell
~# dconf write /org/gnome/terminal/legacy/profiles:/:b1dcc9dd-5262-4d8d-a863-c897e6d979b9/font "'Inconsolata for Powerline Medium 18'"
~# dconf write /org/gnome/terminal/legacy/profiles:/:b1dcc9dd-5262-4d8d-a863-c897e6d979b9/login-shell true
```
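If you want to script this, the key path is just the base, the profile id, and the setting name glued together. A sketch that drives `dconf` via subprocess, using the profile id from the listing above (the helper names are mine):

```python
import subprocess

BASE = "/org/gnome/terminal/legacy/profiles:/"

def profile_key(profile_id, setting):
    """Build the full dconf path for one setting of one profile."""
    return f"{BASE}:{profile_id}/{setting}"

def write_setting(profile_id, setting, gvariant_value):
    """Run `dconf write` for a single profile setting.

    gvariant_value must be GVariant-formatted, e.g. "'Some Font 18'"
    for strings, or "true" for booleans.
    """
    key = profile_key(profile_id, setting)
    subprocess.run(["dconf", "write", key, gvariant_value], check=True)
```

Usage would then look like `write_setting("b1dcc9dd-...", "font", "'Inconsolata for Powerline Medium 18'")`.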
---
title: Adding voltage and current measurements to the sunpowered Pi
date: 2014-04-21 14:39:13
tags: [hardware, raspberry pi, sunpowered]
bigimg:
[
{
src: "/images/adding-voltage-current-measurements-sunpowered-pi/voltages.png",
},
]
series: sunpowered
---
Remember my [sunpowered Pi](/posts/2014/02/16/sunpowered-server-setup/)
installation on the balcony ? Until recently, the only indication I had
of how well the system was doing was an LED blinking red when the battery was
low. Not really helpful in making predictions or taking measurements
over time. I searched for a simple ADC solution to hook up to the Pi
and allow for some voltage and power measurements. I decided on [ADC Pi](https://www.abelectronics.co.uk/p/69/adc-pi-raspberry-pi-analogue-to-digital-converter),
as it seemed very straightforward in its design and also comes with
example python code. Did I say Python ?! It turns
out that Python has an excellent [graph plotting library](http://matplotlib.org/) and is very well equipped
with general purpose functions and support for hardware-specific stuff, like
i2c. But back to the system design. The solar panel itself can generate up to
about 30V under no load. Battery is 12V nominal but can get to 14.5V
when charging, the specs say. ADC Pi however, can measure up to 5V. You
have some resistors lying around, you say ? Voltage divider it is. For
current measurement I've used a [hall effect sensor module](http://www.pololu.com/product/1185) from Pololu. Summarising,
I have 8 measurement lines, hooked up as follows:
1. unused :D
2. voltage of the power line from Pi to current sensor
3. panel voltage
4. battery voltage
5. load voltage
6. regulated 12V rail voltage
7. regulated 5V rail voltage
8. current sensor reading voltage
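For the divider itself, the scaling is just the resistor ratio. A sketch with hypothetical resistor values, sized so the ~30V open-circuit panel voltage stays under the ADC's 5V limit:

```python
# hypothetical divider values - pick your own ratio for your voltages
R_TOP = 56_000     # ohms
R_BOTTOM = 10_000  # ohms

def divider_out(v_in):
    """Voltage the ADC actually sees for a given input voltage."""
    return v_in * R_BOTTOM / (R_TOP + R_BOTTOM)

def real_voltage(v_adc):
    """Scale an ADC reading back up to the measured rail's voltage."""
    return v_adc * (R_TOP + R_BOTTOM) / R_BOTTOM
```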
It turned out that for the current measurement to be accurate I can't
trust the voltage coming from the Pi to be stable. I actually measure
that and use it to calculate the real reading. The docs around the hall
sensor are not that obvious at first sight, so here's the formula I
came up with, for the greater good:
`current = (measured_sensor_voltage - (voltage_powering_sensor/2))/0.185`
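In code, with the supply-voltage compensation described above (0.185 V/A is the sensitivity figure for this sensor module; the rest is the same formula):

```python
SENSITIVITY = 0.185  # V per A, for this hall sensor module

def current_amps(sensor_voltage, supply_voltage):
    """Hall sensor output idles at half the supply voltage, then
    moves SENSITIVITY volts for every amp flowing through it."""
    return (sensor_voltage - supply_voltage / 2) / SENSITIVITY
```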
Finally, sprinkle all of the above with [some python](https://github.com/cyplo/sunpowered) and get the following:
{{< figure src="/images/adding-voltage-current-measurements-sunpowered-pi/voltages.png" position="center" >}}
Looking at this you can spot 2 things. One - April the 5th
was supercloudy and superrainy and the battery did not get much charge. And
the second is that on an average day the battery reaches full
capacity about halfway through the sunny period. This is the data I was after
! I think this means that I can add more batteries to the system; on
a sunny day they will get filled up and this should suffice for a couple
of rainy days in a row. Here, get some photos as a bonus, also:
TODOs, for me, and everyone else, to remember that nothing is ever finished in the
realm of hobby projects:
- add graph for power usage
- would be super cool to add the weather data to the graphs
- clean up python stuff, add more unit tests there
- write next post on the above ;)
{{< gallery dir="galleries/adding-voltage-current-measurements-sunpowered-pi/" />}}
---
title: Better crypto initiative
date: 2014-06-19 12:38:07
tags: [crypto, tls, security]
---
It's not common for me to put just one link here as the content of the
article. Today, however, is one of these days. The
[bettercrypto](https://bettercrypto.org/) folks came up with a
really excellent practical guide to securing your servers. A draft for
now, but already sporting a high concentration of knowledge. BTW - it
being a draft means that you should read it and contribute to it as
well.
---
title: Mount your filament spool on the wall
date: 2014-06-19 13:01:59
tags: [3d printing, hardware]
category: hardware
---
I find it useful to hang as much stuff as possible on the walls, instead
of using shelf/floor/desk space for that. Here is just a quick hack I
did to allow filament spools to be mounted securely to a wall. It turns
out that if you cut off the bent part of the rod, it fits perfectly into the
spool's handle hole.
{{< gallery dir="galleries/filament-spool-wall" />}}
---
title: Compiling git 2.0 on cygwin
date: 2014-07-05 10:06:59
tags: [cygwin]
---
I had some trouble compiling git 2.0 under cygwin. I present you with a
very dirty hack to do so. A proper patch will probably follow. Did I
mention that the hack is dirty and will make your hands burn if you're
gonna type it in ?
```
git clone https://github.com/git/git.git
cd git
git checkout v2.0.1
autoconf
./configure
# so far so good...
make
# oops
# ....
SUBDIR perl
make[2]: /home/cplotnicki/dev/git/perl/0: Command not found
perl.mak:375: recipe for target 'blib/lib/.exists' failed
make[2]: *** [blib/lib/.exists] Error 127
Makefile:16: recipe for target 'all' failed
make[1]: *** [all] Error 2
Makefile:1653: recipe for target 'all' failed
make: *** [all] Error 2
```
Want perl as '0' ? Well, why not. Here you are:
```
# link perl as 0 to some directory that is in your path
ln -s /usr/bin/perl ~/tools/0
# also link here, obviously.
# 'perl' is a direct subdir for your git checkout
ln -s /usr/bin/perl perl/0
make # should now compile
# cygwin's permission scheme is very peculiar
# here, have a dirty hack for your default install
# where there is no su or sudo
chmod a+rwX -R /usr/local/share/man/man3
chmod a+rwX -R /usr/local/lib/perl5/site_perl
make install
git --version
# should display 2.0.1.
# update man pages' cache manually
/etc/postinstall/man-db.sh
```
---
title: Poor man's secrets storage
date: 2014-09-26 10:33:31
tags: [security]
---
I'm a bit cautious when it comes to storing my passwords and other
secrets. I do not use any web or desktop applications to do this for me.
How do I remember those passphrases then ? I have a central file server,
accessible via a tunnel. I store there a gpg-encrypted file containing a
tar archive of a directory with various files containing secrets.
Syncing these files across computers became a bit cumbersome lately. I'm
using git to version them, but because I do not want the sync
server to contain unencrypted secrets I needed to bake a custom
solution. [Bash](https://access.redhat.com/articles/1200223) to the
rescue ! There are still some assumptions made here about permissions,
directories layout and some stuff not failing, but I'm sure you'll be
able to figure this out and tweak to your needs.
```
#!/bin/bash
TUNNEL_CREDS="user@tunnelhost"
TUNNEL_PORT=123
STORAGE_CREDS="storage_user@localhost"
STORAGE_ADDRESS="storagehost.example.org"
SOCKET="/tmp/black_socket"
REMOTE_VAULT_PATH="/somepath/.vault.tar.gpg"
TMP_VAULT="/tmp/.vault.tar.gpg"
TMP_VAULT_TAR="/tmp/.vault.tar"
TMP_VAULT_DIR="/tmp/.vault"
TMP_LOCAL_PORT=10022
LOCAL_VAULT_DIR="$HOME/.vault"
LOCAL_VAULT_BACKUP_DIR="$LOCAL_VAULT_DIR.bak"
pushd `pwd`
echo "removing old vault backup at $LOCAL_VAULT_BACKUP_DIR"
rm -rI "$LOCAL_VAULT_BACKUP_DIR"
set -e
echo "backing up local vault..."
cp -r "$LOCAL_VAULT_DIR" "$LOCAL_VAULT_BACKUP_DIR"
echo "establishing tunnel ..."
ssh -L $TMP_LOCAL_PORT:$STORAGE_ADDRESS:22 $TUNNEL_CREDS -p $TUNNEL_PORT -N -f -M -S "$SOCKET"
echo "tunnel ready, copying remote version of the vault..."
rsync --progress -avz -e "ssh -p $TMP_LOCAL_PORT" "$STORAGE_CREDS:$REMOTE_VAULT_PATH" "$TMP_VAULT"
echo "decrypting new vault..."
gpg -d "$TMP_VAULT" > "$TMP_VAULT_TAR"
echo "unpacking new vault..."
mkdir -p "$TMP_VAULT_DIR"
tar xf "$TMP_VAULT_TAR" -C "$TMP_VAULT_DIR"
echo "pulling from remote vault..."
cd "$LOCAL_VAULT_DIR"
git pull "$TMP_VAULT_DIR"
echo "pulling to remote vault..."
cd "$TMP_VAULT_DIR"
git pull "$LOCAL_VAULT_DIR"
echo "cleaning up a bit..."
rm -fr "$TMP_VAULT_TAR"
rm -fr "$TMP_VAULT"
echo "packing refreshed remote vault..."
tar pcf "$TMP_VAULT_TAR" -C "$TMP_VAULT_DIR" .
echo "encrypting refreshed remote vault..."
gpg -c "$TMP_VAULT_TAR"
echo "sending out updated vault"
rsync --progress -avz "$TMP_VAULT" -e "ssh -p $TMP_LOCAL_PORT" "$STORAGE_CREDS:$REMOTE_VAULT_PATH"
echo "cleaning up.. "
rm -fr "$TMP_VAULT_DIR"
rm -fr "$TMP_VAULT_TAR"
rm -fr "$TMP_VAULT"
echo "closing tunnel.."
ssh -S "$SOCKET" -O exit $TUNNEL_CREDS
popd
```
---
title: Running Eagle on Ubuntu 14.10 64bit
date: 2014-11-01 11:08:10
tags: [eda, eagle, linux]
---
Eagle is still the first choice when it comes to Open Hardware
electronics design. That's a bit unfortunate because the software itself
is proprietary. Sometimes you need to run it though. For example to
migrate projects over to non-proprietary software ! Say, you'd like to
run the new Eagle 7.1 under Ubuntu ? Try the repos. The repos have only the old major
version 6. The harder it is to get proprietary software, the better, I
suppose. Download the blob then:
```
$ wget -c http://web.cadsoft.de/ftp/eagle/program/7.1/eagle-lin-7.1.0.run
$ chmod a+x eagle-lin-7.1.0.run
```
Inspect and run:
```
$ vim eagle-lin-7.1.0.run
$ ./eagle-lin-7.1.0.run
Ensure the following 32 bit libraries are available:
libXrender.so.1 => not found
libXrandr.so.2 => not found
libXcursor.so.1 => not found
libfreetype.so.6 => not found
libfontconfig.so.1 => not found
libXi.so.6 => not found
libssl.so.1.0.0 => not found
libcrypto.so.1.0.0 => not found
```
32bit craziness, you say.
New Ubuntu does not have ia32 libs prepackaged, you say ? Here, have
this handy list of all of the dependencies then:
```
$ sudo apt-get install libxrandr2:i386 libxrender1:i386 libxcursor1:i386 libfreetype6:i386 libfontconfig:i386 libxi6:i386 libssl1.0.0:i386 libcrypto++9:i386
# should show you the installation wizard [sic !]
$ ./eagle-lin-7.1.0.run
```

---
title: Tor talk
date: 2014-11-01 12:19:49
tags: [tor, talks]
categories: talks
---
I gave a talk this Monday, an important one I think - the kind that
spreads knowledge about safe internet usage to people not necessarily
from a tech background. This was my first talk given to such an audience
and, to add to it all, it was given in Polish. The biggest challenge ?
Finding good equivalents for the English tech terms. I think
the talk went quite okay and the discussion afterwards was
lively. I talked a bit about how the internet works and what's wrong with
that, to transition later to what problems Tor addresses and which it
does not. I tried to emphasize that using Tor does not make you
automatically immune to the dangers of the internet. Big thanks to the
organizers, [Praxis](http://praxis.ue.wroc.pl/), a student group from
the Wroclaw University of Economics. You can find my slides
[here](https://github.com/cyplo/talks/tree/master/tor_for_beginners), along with speaker notes.

---
title: GUI Vagrant box
date: 2014-11-09 16:02:13
tags: [linux, vagrant]
---
Recently I've started working on changing my default development
workflow. I'm evaluating vagrant as the main env manager, with docker
on top for extra speed. In short, my `vagrant up` boots up a new dev box
and then a couple of docker containers. What I've found is that there is
not really a plethora of GUI-enabled vagrant boxes, so I've created one !
If you want to use it, go:
```
vagrant init cyplo/ubuntu-gnome-utopic-gui
vagrant up
```
I will write about the whole setup later, as I'm not yet sure what approach is best for me.

---
title: Standing desk
date: 2015-01-06 14:19:40
tags: [hardware]
---
It's been some time since the last photo-story so, please accept these
pictures of my standing desk. On the actual desk there is a laptop
stand, serving the role of a keyboard and mouse rest. The laptop itself is
flipped on its back, its motherboard attached to the back of what once was
the lid. The whole thing is flying on a standard monitor desk mount, using
a custom vesa-to-acrylic mounting system ;)
{{< gallery dir="galleries/standing-desk" />}}

---
title: Backing up and restoring whole block devices
date: 2015-02-14 08:30:51
tags: [raspberry pi, hardware]
---
SD cards are not really reliable storage, especially when used
constantly, e.g. while sitting in an always powered-on Raspberry Pi.
Because of that I've recently needed to perform lots of backup/restore
operations ;) I wrote this script for backing up:
```
#!/bin/bash
if [[ -z $1 ]]; then
echo "usage: $0 device_to_clone"
exit
fi
device=$1
timestamp=`date +%Y%m%d`
dest_file="/tmp/$timestamp.dd.xz"
echo "about to clone $device to $dest_file"
echo "ctrl-c or [enter]"
read
sudo umount $device?
sudo umount $device
sudo sync
sudo pv -tpreb $device | dd bs=4M | pixz > $dest_file
sudo sync
```
And this one for restoring:
```
#!/bin/bash
if [[ -z $1 ]] || [[ -z $2 ]]; then
echo "usage: $0 restore_file.xz device_to_restore_to"
exit
fi
source_file=$1
if [[ ! -f $source_file ]]; then
echo "cannot open $source_file"
exit
fi
device=$2
echo "about to restore $source_file onto $device"
echo "ctrl-c or [enter]"
read
sudo umount $device?
sudo umount $device
pv -tpreb $source_file | pixz -d | sudo dd bs=4M of=$device
sudo sync
sudo eject $device
```
Some of the more fun features include progress bars and making sure you've
unmounted the device properly beforehand ;) This also uses parallel
threads to compress the data, so the XZ compression should not be a
bottleneck on any modern machine. The scripts above were used to backup
and restore SD cards but will work for any block device, be it an
external or internal disk drive, etc. Usage example [remember to use the
whole device, not just its partition, as the argument]:
```
./backup_sdcard /dev/sdc
about to clone /dev/sdc to /tmp/20150214.dd.xz
ctrl-c or [enter]
[sudo] password for cyryl:
umount: /dev/sdc1: not mounted
umount: /dev/sdc2: not mounted
umount: /dev/sdc: not mounted
19,6MiB 0:00:02 [9,72MiB/s] [> ] 0% ETA 0:52:26
```
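One thing the scripts above don't do is prove that the image on disk actually matches what was read from the device. Here is a minimal verification sketch comparing checksums of the source and the decompressed image - `verify_backup` is just a hypothetical helper name, and `xz -d` is assumed to be available (it also handles archives produced by pixz, which emits standard `.xz` data):

```shell
#!/bin/bash
# Sketch: compare a checksum of the source (a file or a whole block
# device, e.g. /dev/sdc - run with enough privileges to read it)
# against the checksum of the decompressed backup image.
verify_backup() {
    local source="$1" image="$2"
    local src_sum img_sum
    src_sum=$(sha256sum < "$source" | cut -d' ' -f1)
    img_sum=$(xz -dc "$image" | sha256sum | cut -d' ' -f1)
    if [[ "$src_sum" == "$img_sum" ]]; then
        echo "OK: image matches source"
    else
        echo "MISMATCH: do not trust this backup" >&2
        return 1
    fi
}
```

Run it as e.g. `verify_backup /dev/sdc /tmp/20150214.dd.xz`, right after a backup, while the card is still plugged in.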

---
title: Tools - PCB holder
date: 2015-03-28 18:33:53
tags: [hardware, electronics]
series: tools
---
I thought it would be cool to share with you the tools I find
surprisingly useful. Behold the first in the series: the PCB holder ! I
cannot overstate how much is that of a difference from the 'third
hand'-type of holders. The grip is very firm but won't scratch the
surface nor short anything because the jaws are made from a soft
plastic. And the whole thing **ROTATES** !
{{< gallery dir="galleries/tools-pcb-holder" />}}

---
title: CNC router arrives
date: 2015-04-26 13:17:05
tags: [hardware, cnc]
series: cnc
---
After 2 months of waiting - my CNC router arrives. 8 weeks lead time
they said - 7 weeks and 4 days it was ! Who are they ?
[TanieCNC](http://tanie-cnc.pl/) people [CheapCNC in Polish :].
It may look like they don't know how to make websites AND their
name does not instill a lot of confidence - but hey, they certainly
know how to weld and make precise machinery ! The size of the package
caught me off guard, I've spent an hour disassembling the crate in full
sun. After that I wasn't able to get it through the stairs myself,
fortunately a friendly neighbour gave me their pair of hands. Lifting
the machine by 2 people is okay, it's still not lightweight, but
bearable. Putting it on the table was a different affair entirely.
Careful not to damage anything, especially the motor assemblies - we've
put it on an impromptu wood ramp. Using heavy duty straps, we've lifted it
up little by little. Then some inspection - the quality is really
superb, especially of the metal frame ! After that I got an old PC with
Windows XP and parallel port running Mach3 software - I wanted to set it
up as in any other shop at start. Later on I'm planning on moving to
LinuxCNC and then gradually off parallel port on to a USB stack,
something more like an arduino parsing gcode and driving motors instead
of relying on the accurate timing of the PC. TODOs:
- add an MDF bed layer on top of existing bed
- get better clamps
- get more router bits
- get a vacuum attachment for the spindle
- move to LinuxCNC
{{< gallery dir="galleries/cnc-router-arrives" />}}

---
title: Update on the sunpowered server
date: 2015-05-02 14:54:05
tags: [observability]
series: sunpowered
---
Some, rather long, time ago I've added a [custom python](/posts/2014/04/21/adding-voltage-current-measurements-sunpowered-pi/)
data acquisition and graphing to my sunpowered RaspberryPi installation
on the balcony. Since then I've upgraded it to Raspi2 and ported the
data thingy to influxdb + grafana. I am very positively surprised by
all 3 of those things.
RaspberryPi2 - definitely worth the upgrade - it's a speed demon now. Small caveat - I recommend
installing raspbian from scratch, especially if you had some custom
overclocking config, as these do not seem to be compatible between Pi1 and Pi2. Also RasPi2 needs a microsd card instead of full-sized one. As for
the software - since everything went surprisingly smoothly this post is
not much of a tutorial. Just go to [influxdb](http://influxdb.com/)
and [Grafana](http://grafana.org/) and go through the respective
installation documentation. You need x86 64bit server to host this, so
unfortunately no self-hosting on RaspberryPi - at least I wasn't able to
compile the software there. I've [changed the original python scripts slightly](https://github.com/cyplo/sunpowered/tree/master/software),
to upload the data to influxdb instead of graphing directly via
matplotlib. Then configured grafana to display some cool graphs and that
was pretty much it - you can see the result at
[data.cyplo.net](http://data.cyplo.net/dashboard/db/sunpowered).
Right now I'm testing 2 different sizes of solar panels and
batteries, hooked up at the same time. The ADC is connected as it was
before though, so a TODO is to add more measurements, to see how the
individual panels' output changes during the day and how it affects
each of the batteries.
{{< gallery dir="galleries/sunpowered-data" />}}

---
title: My CNC machining workflow
date: 2015-05-28 19:01:33
tags: [cnc]
series: cnc
---
Hello ! Today we'll talk about driving CNC machines, toolpaths and
Debian, so stay tuned ! I have a separate old PC for driving the CNC
machine via parallel port. This is, as they say, the Only Proper Way and
It Was Always Like That. I'm thinking about changing this to
USB+[grbl](https://github.com/grbl/grbl) in the future then ;) Up
till now, my workflow went as follows:
- Do the research and drawing/parts modeling in FreeCad, on my main workstation
- Export to e.g. DXF
- Import in HeeksCad
- Design machining operations, export gcode
- Copy gcode to a network drive
- Switch the monitor and keyboard to the one of the old PC
- Open gcode in LinuxCNC and go
What happens if it's not perfect at first try though ?! Most commonly the issue  is with the
toolpath, like I want to change feeds or speeds or depth of cut, rarely
it is with the part itself, fortunately. It may have something to do
with the fact that I'm mostly working with other people's parts for now
;) Anyway, to do any correction I need to switch back to the main
workstation, correct in Heeks, re-export to the network drive and switch
back, reimport. Not very annoying but not very convenient either. But
wait. What if...I install Heeks on the old PC ?! It's running
[LinuxCNC](http://linuxcnc.org/) realtime distro, which is based on
Debian wheezy. Heeks packages are available prepackaged for Ubuntu only.
With the help of the
[documentation](https://code.google.com/p/heekscad/wiki/BuildDebianPackages)
and the comments there and in other corners of the internet I was able
to get this little script done:
```bash
#!/bin/bash
set -e
set -v
sudo apt-get update
sudo apt-get -y install liboce-visualization-dev libwxgtk2.8-dev libgtkglext1-dev python-dev build-essential bzr git libboost-dev libboost-python-dev subversion debhelper cmake liboce-ocaf2 liboce-ocaf-dev oce-draw
mkdir heeks_build
cd heeks_build
svn checkout http://libarea.googlecode.com/svn/trunk/ libarea
cd libarea
dpkg-buildpackage -b -us -uc
cd ..
sudo dpkg -i libarea*.deb python-area*.deb
svn checkout http://heekscad.googlecode.com/svn/trunk/ heekscad
cd heekscad
dpkg-buildpackage -b -us -uc
cd ..
sudo dpkg -i *heeks*.deb
svn checkout http://heekscnc.googlecode.com/svn/trunk/ heekscnc
cd heekscnc
dpkg-buildpackage -b -us -uc
cd ..
git clone https://github.com/aewallin/opencamlib.git
cd opencamlib
bzr branch lp:~neomilium/opencamlib/packaging debian
dpkg-buildpackage -b -us -uc
cd ..
sudo dpkg -i python-ocl*.deb
sudo dpkg -i heekscnc*.deb
```
Run this on your LinuxCNC machine and that's it. It will download and
build all the dependencies and Heeks CAD and CAM packages. This way, my
current workflow goes more like this:
- Do the research and drawing/parts modeling in FreeCad, on my main workstation
- Export to e.g. DXF to a network drive
- Switch the monitor and keyboard to the one of the old PC
- Import in HeeksCad
- Design machining operations, export gcode
- Open gcode in LinuxCNC and go
- Repeat last 2 steps if necessary - no machine switching

---
title: Containerized zombie spawner
date: 2015-08-12 21:21:13
tags: [docker]
---
Recently I was playing with a fully Dockerized setup of Jenkins at work
and found a curious issue there. Whenever Jenkins was polling the git
server the side effect was that it created a zombie ssh process. The
issue is actually
[remediated](https://github.com/jenkinsci/docker/issues/54) by the
Jenkins team now by [explicitly using](https://github.com/jenkinsci/docker/commit/d5aea67dcae9d62fe4ca6ad961ffe66f65d9a591)
a tiny init system called ... [tini](https://github.com/krallin/tini) started as the main
container's process instead of just starting Jenkins there. This tiny
tini thing can properly adopt and reap the children. I was all like -
wow, what a great blog entry is coming at me. I was planning to describe
how zombies come to existence on Linux and why Docker should, in my
opinion, provide an adopter-reaper by default and other very interesting
things ! But then I found a really excellent article by the
[Phusion](http://www.phusion.nl/) team
[here](https://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/)
explaining all that and more. It is very good. You should read it. That
is it. The end. Happy reaping !
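As a side note - if you are on a newer Docker (1.13 or later, if I remember correctly), you don't even need to bake tini into the image yourself; the `--init` flag asks Docker to inject a tiny init as PID 1 for you. The image name below is just an example:

```
# let Docker run a small init process as PID 1,
# which adopts and reaps any zombie children
docker run --init -d jenkins/jenkins:lts
```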

---
title: When life gives you temp space - make lemonade !
date: 2015-09-02 20:00:24
tags: [cloud, azure, linux]
---
Some VPS providers, e.g. Azure, provide you with 2 disk drives for
your VPSes. One is the system disk, of very limited size, and the other
one is spacious, but with no guarantee that the data survives a reboot.
Basically it means that you can have a small VPS, with a small amount of
RAM but a large temp disk space. Why could this be useful ? Imagine tasks
with big memory requirements that don't need to be extra fast, where
swapping is allowed. Like complex nightly builds. Here is a set of super
simple scripts I've come up with to quickly boot up a system and then,
in the background, add a new swap file on the temp drive there. The temp
drive is assumed to be under /mnt.
```
root@someazurehost:~# cat /etc/rc.local
#!/bin/sh -e
set -v
# do not wait for swap to become online,
# proceed with the boot further,
# with swap being created in the background
/etc/make_and_enable_swap &
exit 0
```
```
root@someazurehost:~# cat /etc/make_and_enable_swap
#!/bin/sh
set -e
set -v
# create new 2GB swap file
dd if=/dev/zero of=/mnt/swap bs=1M count=2048
chmod 0600 /mnt/swap
mkswap /mnt/swap
swapon /mnt/swap
```
Don't forget to make `/etc/make_and_enable_swap` executable !
Do not add this swap file to fstab, as fstab is read before rc.local,
and that would likely result in a boot failure, as the swap file would
not be ready yet.
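Once the box is up, you can check after a minute or so whether the background job did its thing - something along these lines:

```
# confirm the swap file came online (sizes are in kB)
swapon -s
free -m
```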

---
title: Enabling USB 3.0 in already existing Virtualbox VMs
date: 2015-09-27 12:09:05
tags: [virtualbox]
---
Just a quick note on how to get USB 3.0 in Virtualbox for VMs that were
created with USB 1.1 support only. First, download VirtualBox Extension
Pack from [here](https://www.virtualbox.org/wiki/Downloads). Install
it. Then quit Virtualbox completely. Go to your directory that contains
your virtual machine and edit `.vbox` file. Replace the whole
`<USBController>` section with the following:
```xml
<USB>
<Controllers>
<Controller name="xHCI" type="XHCI"/>
</Controllers>
<DeviceFilters/>
</USB>
```
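If you'd rather not hand-edit the XML, recent VirtualBox versions should be able to flip the same setting from the command line while the VM is powered off - replace the VM name with your own:

```
# enable the xHCI (USB 3.0) controller; needs the Extension Pack installed
VBoxManage modifyvm "My VM" --usbxhci on
```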
That's it, let me know if it works for you !

---
title: Tools - precision gauge holder
date: 2015-10-17 13:46:36
tags: [cnc, tools]
series: tools
---
This is the second part in the [series](/series/tools) on the tools I use. Tools that are
surprisingly useful, tools that are not that obvious to find. Today:
how to calibrate the CNC axes without actually cutting anything ? Use a
test indicator ! How to hold the meter steady though ? Attach it to the
frame of your router using the power of magnets ! Sample item on Amazon
[here](http://www.amazon.com/gp/product/B00OZA71H6/ref=as_li_qf_sp_asin_il_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=B00OZA71H6&linkCode=as2&tag=adventucomput-20&linkId=6ZJKCJHMA2YON4LY)
[affiliate link warning]
Despite being attached to the frame by its back instead of the bottom it
still holds beautifully.
{{< gallery dir="galleries/tools-indicator-holder" />}}

---
title: Fixing timeouts running Android integration tests
date: 2015-10-25 09:55:54
tags: [android, testing]
---
I run Android tests on CI and after having switched to Lollipop recently the integration tests wouldn't run. Invoking
`androidConnectedTest` gradle target always resulted in crashing with
`ShellCommandUnresponsiveException`. Internet says that in such a case
ou just need to set `ADB_INSTALL_TIMEOUT`. I tried to no avail.
Sourcediving it is then !
A long while after that I got to this file:
[Device.java](https://android.googlesource.com/platform/tools/base/+/master/ddmlib/src/main/java/com/android/ddmlib/Device.java) [Linking
to master, here's the commit
hash:`1cb1a4c2976b99ae53d28d7f01d975232c85f990`, as I don't seem to be
able to find how to link to that hash directly] 
What do we see there ? That indeed `ADB_INSTALL_TIMEOUT` is being used:
```java
static {
String installTimeout = System.getenv("ADB_INSTALL_TIMEOUT");
long time = 4;
if (installTimeout != null) {
try {
time = Long.parseLong(installTimeout);
} catch (NumberFormatException e) {
// use default value
}
}
INSTALL_TIMEOUT_MINUTES = time;
}
```
So far so good,
`ADB_INSTALL_TIMEOUT` system variable seems to be respected when
invoking package installation tools. Are the above the only methods that
can install a package though ? Going further on that hunch we see that
in addition to installing single packages there is a possibility of
having a multi-package installation session.
```java
public void installPackages(List<String> apkFilePaths, int timeOutInMs, boolean reinstall, String... extraArgs) throws InstallException {
assert(!apkFilePaths.isEmpty());
if (getApiLevel() < 21) {
Log.w("Internal error : installPackages invoked with device < 21 for %s",Joiner.on(",").join(apkFilePaths));
if (apkFilePaths.size() == 1) {
installPackage(apkFilePaths.get(0), reinstall, extraArgs);
return;
}
Log.e("Internal error : installPackages invoked with device < 21 for multiple APK : %s", Joiner.on(",").join(apkFilePaths));
throw new InstallException("Internal error : installPackages invoked with device < 21 for multiple APK : " + Joiner.on(",").join(apkFilePaths));
}
[...]
String sessionId = createMultiInstallSession(apkFilePaths, extraArgsList, reinstall);
```
Aha ! Non-Lollipop check here, with a
fallback to the old method - we may be onto something ! Some lines pass
and we can see an invocation of `createMultiInstallSession`. What's
there ?
```java
private String createMultiInstallSession(List<String> apkFileNames, @NonNull Collection<String> extraArgs, boolean reinstall) throws TimeoutException, AdbCommandRejectedException, ShellCommandUnresponsiveException, IOException {
[...]
String cmd = String.format("pm install-create %1$s -S %2$d", parameters.toString(), totalFileSize);
executeShellCommand(cmd, receiver, DdmPreferences.getTimeOut());
[...]
```
A different invocation of
`executeShellCommand`, now using `DdmPreferences.getTimeOut()` as a
timeout value source.
Summarizing - this only happens if you install
multiple applications for your `androidConnectedTest` and the android
device you test on has an API version that is equal to or greater than
21. That is all cool that we had this little Computer Science
Investigation, but how to fix that - i.e. how to have proper timeouts
for your installations ? Ideally from somewhere you configure and/or
invoke your builds. It turns out that gradle supports invoking just
enough Java for us to use there. In your `gradle.build` as the very
first lines:
```groovy
println "setting global timeout for apk installation to 10 minutes"
com.android.ddmlib.DdmPreferences.setTimeOut(600000)
android {
compileSdkVersion compileSdk
buildToolsVersion buildTools
[...]
```
That's it. Invoke your android tests with
`ADB_INSTALL_TIMEOUT` env variable set **AND** have the
`DdmPreferences` timeout set in your `build.gradle` as in the example
above and you should be golden. Happy droiding !

---
title: Sporting Let's Encrypt certificate
date: 2015-11-17 09:51:25
tags: [tls, letsencrypt]
---
[Let's Encrypt](https://letsencrypt.org/) is a soon to be publicly
available service to provide TLS certificates for free and in automated
fashion. I'm helping them test their systems by participating in the
[beta programme](https://letsencrypt.org/2015/11/12/public-beta-timing.html).
Because of that, this site is now sporting a publicly recognizable
certificate from Let's Encrypt. Yay ! The work they are doing is, I
believe, of much importance. The lack of an easy path for getting a TLS
certificate was a major hurdle for most of the website owners otherwise
keen on getting their traffic encrypted. This, in turn, is important for
two reasons. One is not allowing the content you are viewing on the
internet to be snooped on, and not leaking your private data. The other,
less known but just as important, is countering the ability to inject
malicious code/data into the responses from the servers, in turn
preventing lots of man in the middle attacks. Hopefully, when Let's
Encrypt becomes publicly available, we will be on the path to a much
friendlier internet for all.

---
title: Productive New Year's Eve
date: 2015-12-31 20:11:35
tags: [electronics]
---
It turns out that the last day of the year can be quite productive. I've
managed to sneak in a PCB layout for the input measurement and
protection board for my [sunpowered](/series/sunpowered) series. Got it
ordered, can't wait to test it ! I got carried away with via stitching
there a bit - we'll see how hard it will be to solder the larger
components because of that. The whole project is hosted
[here](https://github.com/cyplo/sunpowered/tree/master/electronics) - it's
open hardware so please do take a look.
{{< gallery dir="galleries/productive-new-years-eve" />}}

---
title: Conferences you did not know existed and now you know you just need to go to
date: 2016-03-13 11:43:03
tags: [social, conferences]
---
# Go to conferences. It's good for you.
Ever wondered why to go to conferences ? Is it worth it, especially if
you [do not feel confident](http://sarah.thesharps.us/2016/02/02/first-timers-guide-to-foss-conferences/)
that the conference is on something that would be in an immediate realm
of your day-to-day interests ? Just go ! You might end up learning
something and meeting new people. And new people mean more variables to
mix up your life. A bit scary, I know. Still worth it though. Wondered
which conferences in **Europe** are weird enough that people wouldn't
mind having another misfit ? Here you go.
# nucl.ai
Now imagine something niche within the industry. To me, AI comes to
mind. Mages in robes in their towers, sending out autonomous robot
armies against each other. Right ? There's a conference for that ! At
[nucl.ai](http://nucl.ai/), in July, in Vienna, you can meet AI
professionals from academia and industry alike, most of them coming from
game development background. The atmosphere is very newcomer-friendly,
the only requirement is that you're interested in AI, not necessarily
having anything to do with it professionally. Petra and Alex, the
organizers, are wonderful people that make this conference quite unique.
It's happy fun to see ideas floating from academia to the industry and
vice versa. Don't forget the presentations on work-in-progress major AAA
games ! I've learned there that the everyday programming in the games
industry is quite different from the one I do in terms of best practices
and patterns. Definitely worth a try !
# Erlang User Conference
Speaking of something completely different. Erlang ! Was functional
before being functional was [considered cool](https://www.youtube.com/watch?v=xrIjfIjssLE). Very practical,
very fast, parallelize everything ! Processes everywhere ! Message
passing ! Want to feel the groove as well ? Visit Stockholm in September
for [Erlang User Conference](http://www.erlang-factory.com/). Meet
creators of the language, talk about massive scale deployments. Other FP
languages are welcome. I would even say, the more strange the better.
[Idris](http://www.idris-lang.org/) seems to be the topic of interest
as well. Eat good food, walk the city of dynamite. Erlang has a quite
close-knit community which gives you this warm family feeling.
# SoCraTes
Speaking of family. There's a family of conferences known as
[SoCraTes](https://www.socrates-conference.de/). Software
Craftsmanship and Testing. SoCraTes. Get it ? :D As it's more than one
conference a year, it's harder to go to all of them. I only went to
[Codefreeze](http://www.codefreeze.fi/), in Finland, in January, and
only once, but loved it already. Codefreeze, Finland, in January. Get it
?! :D First thing is that you don't know what will happen there. You
just go, meet the people and then you sit down and everyone talks about
what they would like to learn and what they can share. This way you come
up with some loose agenda you follow more or less. One track, very
intense. Talking, swimming in ice water, learning, sauna going. Very
intimate feel, not many people, lots of things to learn.
# Chaos Communication Congress
Speaking of lots of people and intimate feel. [Chaos Communication Congress](https://en.wikipedia.org/wiki/Chaos_Communication_Congress) is, I think, the biggest gathering of all nerds and freaks in Europe,
yet being there feels quite at home. The best organized conference I've
been to. Friendly people showing their wares on their humble tables, 4
simultaneous tracks, huge building with intricate hallways. It's
cyberpunk, it's inclusive, it's about hardcore infosec topics, it's
about politics. Everything's there, everyone's there. 4 days of data
being constantly uploaded to your brain, riding on the waves of [Club
Mate](https://en.wikipedia.org/wiki/Club-Mate). Feels like an
extended holiday present, as it's happening in the last week of
December.
# The Camps
[The Dutch Camp](https://en.wikipedia.org/wiki/Observe._Hack._Make.), [EMF Camp](https://www.emfcamp.org/) and
[Chaos Communication Camp](https://en.wikipedia.org/wiki/Chaos_Communication_Camp).
Imagine CCC, the Congress, but in the woods. Middle of nowhere,
multigigabit internet connection, sleeping in a tent, then going to a
world-class lecture. Both CCC, the camp, and The Dutch Camp
are happening every 4 years, each of them taking alternating slots. So
every 2 years there is a camp somewhere. EMF is there even more frequently - every 2 years !
Did I mention that the location changes with almost every event ? Happy camping !
# Summary
So go out there. Explore. Meet people. Learn. Have fun. None of the
above are very expensive and most of the events have a policy of "write
to us if you want to go but can't afford the ticket". Friendly people
fixing problems together.
- nucl.ai: July, tickets around 250EUR
- EUC: September, tickets around 350EUR
- SoCraTes: the cost and the time depend on the event
- CCC: last week of December, tickets around 100EUR
- The Camps: summer, tickets around 125EUR

---
title: Don't throw away. Fix.
date: 2016-06-12 12:25:34
tags: [hardware]
---
Try fixing your things. It's already broken - no need to sweat.
You might learn something.
Level 2 - push for things to be easily fixable.
It's good for you. It's good for the planet.
{{< gallery dir="galleries/dont-throw-away-fix" />}}

---
title: The best terminal emulator for Windows 10's Bash or how to run X applications
date: 2016-07-06 19:21:13
tags: [windows]
aliases:
- /posts/2016/07/06/terminal-emulator-windows-10-bash.html
- /posts/2016/07/06/terminal-emulator-windows-10-bash
- /posts/2016/07/06/terminal-emulator-windows-10
---
**update as of 03/2017**
As this article is by far the most popular on my site right now I feel that an update is necessary, as the landscape evolved a bit since the original publish date.
[Conemu](https://conemu.github.io/) started to support Bash on Windows properly now and this is what I settled on in the end.
The article below is still relevant for _the running X applications on Windows_ part though.
---
I've been playing a bit with
[GNU/kWindows](https://mikegerwitz.com/2016/04/GNU-kWindows) a.k.a.
Bash on Windows a.k.a Windows Subsystem for Linux (Beta). I was
especially interested whether I can use my regular Linux
[dotfiles](https://github.com/cyplo/dotfiles/) to recreate my working
environment of zsh + tmux + vim. The biggest troubles I had were with
the terminal emulator. While the default one, invoked by saying **bash**
is much better than powershell or cmd.exe already - it lacks some things
I've learned to rely on, like 256-colour palette support. Thus the
search for the ultimate terminal emulator began. I tried **ConEmu**,
**cmder** and their spinoffs to no avail. Either the colours were
lacking, or the emulator would swallow up certain strokes like the arrow
keys. Then I thought - hey, I use **terminator** on Linux, maybe it
would be possible to use it here as well. To my surprise the answer was - yes !
- install Windows Subsystem for Linux
- restart Windows
- install [VcXsrv](https://sourceforge.net/projects/vcxsrv)
- run VcXsrv
- invoke the following from the bash console:
```
sudo apt-get install terminator
DISPLAY=:0 terminator -u
```
- profit !
{{< figure src="/images/terminal-emulator-windows-10/capture.png" position="center" >}}
The font rendering is not ideal and the borderless mode does
not work, but hey, it is quite usable nonetheless ! It even has
bidirectional clipboard sharing with Windows' windows, which is good.
P.S. konsole and gnome-terminal do seem to have troubles launching
(crashy-crashes there)

---
title: RustFest - organization was the best. Also rhymes.
date: 2016-09-18 18:43:47
tags: [conferences]
---
# I went to RustFest and it was amazing !
It was clearly the best conference, organization-wise, I've been to so far.
It made me think of what I really liked there. What made it so awesome and welcoming ?
To me this was a large number of small things just coming together. I'm listing them here for me to remember and for others to use.
Let's make conferences at least this friendly, together !
## Before the event:
- very clear emails, repeating messages couple of times in different emails for the important stuff
- maps of the popular routes provided, e.g. from the airport and bus station
- supporter ticket sale - two times the price - you pay for another person that wouldn't be able to get a ticket otherwise
- survey on dietary requirements/allergies
- survey on childcare needs
- clear statement of the code of conduct for the conference
## During the event:
- very good MC person
- keeping people entertained through the tech breaks
- keeping tabs on the talk length, allowing appropriate amount of questions if the time allowed
- live captioning of all talks - small screen outside of the view of the main screen with the text live
- getting the next speaker prepared and mic-tested before the end of the current talk
- quiet room to rest with clear rules on no talking and interrupting there
- clear signage for the different parts of the venue (washrooms, quiet room, party space etc)
- washrooms
- all same, ungendered
- basic items for free in said washrooms
- chewing gum
- tooth paste
- tampons, pads
- baby wipes
- info desk/registration
- clear info during registration, handing out programme
- asking people if they want to be on the photos - giving out lanyards accordingly - flashy red lanyards for people who do not want photos of them taken
- emergency number to call with stated purpose
- code of conduct violation
- if you are lost in the city

---
title: Using ad hoc Docker volumes on SELinux systems
date: 2016-12-30 18:28:50
tags: [docker]
---
I've recently tried running some quick Docker commands using host's directory as a volume:
```
docker run -i -t -v `pwd`:/mnt debian:jessie bash -c "ls -hal /mnt"
ls: cannot open directory /mnt: Permission denied
```
I use Fedora as my main OS, which, it turns out, has some pretty nice SELinux settings. These deny access from inside the container to the outside.
Said Fedora consists mostly of almost-newest-but-stable everything though, which means its Docker is a fairly recent version.
A version that knows how to change the SELinux context of the directory we're mounting, all by itself !
You need at least Docker v1.7 for this.
```
docker run -i -t -v `pwd`:/mnt:Z debian:jessie bash -c "ls -hal /mnt"
total 8.0K
drwxrwxr-x. 2 1000 1000 4.0K Dec 30 18:34 .
drwxr-xr-x. 21 root root 242 Dec 30 19:07 ..
```
Please note the capital `Z` as a mount parameter.
And that is it. Mostly. Some cleanup remains, as Docker leaves the host's directory with a changed SELinux context.
To restore it you need to
```
restorecon -R `pwd`
```
Or use any other path you'd like instead of `pwd` in the examples above.
Happy dockering !

---
title: Running host programs in isolation inside one-off Docker containers
date: 2017-02-17 22:08:46
tags: [docker]
---
I am quite bad at remembering how to launch Docker to have everything set up correctly. Hence the following - a script that launches any command line specified in its arguments inside a new Docker container. The current directory is mounted inside the container automatically, so the thing you are executing can have its local dependencies satisfied.
```bash
#!/bin/bash
USERNAME=`whoami`
MOUNT_PATH="/mnt"
CURRENT_DIRECTORY=`pwd -P` # untangle symbolic links if needed - SELinux needs the real path
IMAGE="debian:jessie"

if [[ -z $1 ]]; then
    echo "usage: `basename $0` command_to_run_inside_a_container"
    exit 1
fi

RESOLVED_ARGUMENTS="$@"
docker run -i -t -v "$CURRENT_DIRECTORY":"$MOUNT_PATH":Z $IMAGE bash -c "useradd -M -d '$MOUNT_PATH' $USERNAME && cd '$MOUNT_PATH' && bash -c '$RESOLVED_ARGUMENTS'"

# restore SELinux context for the current directory
restorecon_path=`which restorecon`
if [[ -x "$restorecon_path" ]]; then
    restorecon -R "$CURRENT_DIRECTORY"
fi
```
I use vanilla Debian Jessie as a run platform there, mostly because this is what most of my servers run.
The script covers setting up SELinux and mounting the directory from which it is run as `/mnt` inside the container while also having the default non-root user added.
Run Jessie, run !

---
title: Upload your site to Netlify using their incremental deployment API
date: 2017-03-20 19:25:00
tags: [netlify]
---
I've recently switched to a setup where I do all my builds for this blog on [Travis](https://travis-ci.org/cyplo/blog). While doing so I needed to migrate away from using Netlify's internal build infrastructure. This resulted in a quick [python script](https://github.com/cyplo/netlify_deployer) that allows you to upload an arbitrary directory tree to Netlify, using their [incremental deployment API](https://www.netlify.com/docs/api/#deploying-to-netlify). All that means that while this site is quite big in size, the deployments go rather quickly ! There are some [known issues](https://github.com/cyplo/netlify_deployer/issues) but apart from those the script should just work for any custom Netlify deployment you would like to have. I use it on this very site, to have a preview of any PR before merging it, as well as for deploying the main site after the PR is merged. I hope you will find it useful - and please do not hesitate to post an issue or a PR !

---
title: Adding graphs to posts in Nikola
date: 2017-07-15 20:13:50
tags: [nikola]
---
**Update as of April 2019**
This blog no longer runs on Nikola so the embedded examples do not work anymore.
The method described in the post is still valid though.
---
I really like to teach and try to explain things in a simple manner. There is often no better way of making an explanation than visualizing it.
The problem is that I really can't draw, especially on a computer.
Wouldn't it be awesome if I could make the computer draw for me ?
I found out that, unsurprisingly, there is software for that already. The one I like is called mermaid - it renders a simple text description of a graph or diagram into an HTML representation. It can look something like this.
```html
<div class="mermaid">
graph TB
subgraph one
a1-->a2
end
subgraph two
b1-->b2
end
subgraph three
c1-->c2
end
c1-->a2
</div>
```
This blog is rendered by Nikola hence I would like to show you how I've added mermaid support to my Nikola installation.
I use `USE_BUNDLES = False` in `conf.py` as it gives me more control and is more HTTP/2 friendly.
With that disabled I can include mermaid's style and js files like so (also in `conf.py`):
```python
EXTRA_HEAD_DATA = """
<link rel="stylesheet" type="text/css" href="/assets/css/fontawesome.css">
<link rel="stylesheet" type="text/css" href="/assets/css/titillium.css">
<link rel="stylesheet" type="text/css" href="/assets/css/mermaid.forest.css">
"""
BODY_END = """
<script src="/assets/js/mermaid.js"></script>
<script>mermaid.initialize({startOnLoad:true, cloneCssStyles: false});</script>
"""
```
Where do all these files come from though ? In my case, I have a custom theme, based on `zen`, called `zen-cyplo`. The assets in the sources are located under `themes/zen-cyplo/assets/`. Oh, and `cloneCssStyles: false` is there as the default of `true` made the different CSS styles on my blog clash.
Finally, to use mermaid in a post (for reStructuredText):
```html
<div class="mermaid">
graph TB
subgraph one
a1-->a2
end
subgraph two
b1-->b2
end
subgraph three
c1-->c2
end
c1-->a2
</div>
```
~~You can click on `source` button located below the title of this post to see it in action. If you are interested in the build process and how all these come together - the complete sources for this blog are hosted under https://github.com/cyplo/blog~~

---
title: Setting up Rust development environment using VSCode on a Mac
date: 2017-11-25 13:50:27
tags: [rust, vscode]
---
## Completion and highlighting
While on Linux VSCode with the Rust plugin seems to work more or less out of the box, on a Mac I needed to spend some time configuring it.
First things first though, let's start by installing Rust version manager, rustup.
```
curl https://sh.rustup.rs -sSf | sh
```
We will be using the nightly version of Rust so as to have one version that can compile all of our tools. This is mostly due to [`clippy`](https://github.com/rust-lang-nursery/rust-clippy) requiring a nightly compiler.
```
rustup install nightly
rustup default nightly
```
We will need Rust Language Server to provide the code completion.
```
rustup component add rls-preview --toolchain nightly
rustup component add rust-analysis --toolchain nightly
rustup component add rust-src --toolchain nightly
```
For a more wholesome experience, please have some tools as well:
```
cargo install clippy rustfmt rustsym
```
Now finally, for the VSCode itself, press `cmd-p` and `ext install vscode-rust`. I'm using the new `Rust` extension as `Rusty Code` has been discontinued.
If you're lucky - that's it, you should have working completion and highlighting in Rust files. Check this by opening any Rust source code file.
If you're greeted by this message: `You have chosen RLS mode but neither RLS executable path is specified nor rustup is installed` - then we need to get the extension to get to know your setup a bit:
In VSCode go to `Settings` using `cmd-,` and put the following config elements there:
```json
{
  "rust.cargoPath": "/Users/yourusername/.cargo/bin/cargo",
  "rust.cargoHomePath": "/Users/yourusername/.cargo",
  "rust.rustfmtPath": "/Users/yourusername/.cargo/bin/rustfmt",
  "rust.rustsymPath": "/Users/yourusername/.cargo/bin/rustsym",
  "rust.rustLangSrcPath": "/Users/yourusername/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/src/rust/src",
  "rust.mode": "rls",
  "rust.rls": {
    "executable": "/Users/yourusername/.cargo/bin/rls",
    "useRustfmt": true
  }
}
```
As the paths in the config need to be absolute, remember to adjust to your situation (system username) accordingly.
Now when you reload and start editing a Rust file you should see `RLS: Analysis finished` on the bottom bar and the completion and highlighting should all work. Yay !
## Building and testing
VSCode has a system of tasks that we can leverage to run the build and test from within VSCode.
If you go to `Tasks->Configure` tasks it will create an empty `tasks.json` file in your repository.
Change it to the following to allow for `cargo` to be hooked up as your build tool and test runner.
```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "build",
      "type": "shell",
      "command": "cargo build",
      "group": {
        "kind": "build",
        "isDefault": true
      },
      "problemMatcher": []
    },
    {
      "label": "test",
      "type": "shell",
      "command": "cargo test",
      "group": {
        "kind": "test",
        "isDefault": true
      }
    }
  ]
}
```
You can use `cmd-shift-b` to run the build now.
## Debugging
For the native debugger to work we need to install another extension to VSCode called ['LLDB Debugger'](https://github.com/vadimcn/vscode-lldb/blob/master/MANUAL.md). That would be `cmd-p` and `ext install vadimcn.vscode-lldb`.
After reloading VSCode you should be able to set breakpoints on the side gutter and run the program under the debugger by pressing `F5`. Doing this for the first time will bring up the debugger choice window. Choose `LLDB Debugger` as your debugger and you will be greeted with a JSON configuration file in which you need to tell the debugger a few details about your project. It may look like this:
```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "lldb",
      "request": "launch",
      "name": "Debug",
      "program": "${workspaceRoot}/target/debug/name_of_your_executable",
      "args": [],
      "cwd": "${workspaceRoot}",
      "preLaunchTask": "build"
    }
  ]
}
```
And that should be it !
Now you should be able to set breakpoints and debug through the code.
Start the debugging session by pressing `F5` again - this should result in the build proceeding and then the debugger launching.
## Questions ?
Any questions ? Ask on [https://users.rust-lang.org/](https://users.rust-lang.org/) and ping me the link to the post on [Twitter](https://twitter.com/cyplo) or email it to me at [blog@cyplo.net](mailto:blog@cyplo.net). This way the answer will be visible to everyone in the community.
Keep on Rusting !

---
title: Configure AWS Elastic Beanstalk Docker environment variables
date: 2018-04-23 12:00:00
tags: [aws, docker]
---
AWS Beanstalk is a good 'intermediate' level hosting for Docker containers. It gives you load balancing and scalability pretty much out of the box in exchange for being a bit more opaque to configure. The Docker bits are a bit more hidden away there.
In a typical production setup you would want to have Docker images not containing anything environment related, e.g. to be able to run them both in production and locally. An easy way to achieve that with Docker is via environment variables. On the local environment it's `docker run --env NAME=VALUE` - what would be a Beanstalk equivalent though ?
It turns out that Beanstalk has a magical configuration directory structure that you can pass to an environment.
It goes like this:
```
configuration.zip
├── Dockerrun.aws.json
└── .ebextensions/
    └── environmentvariables.config
```
Where `Dockerrun.aws.json` is your regular Docker definition file for Beanstalk; it can look like this:
```
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "image:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "1234"
    }
  ]
}
```
While `.ebextensions/environmentvariables.config` is where, well, you set the environment variables that will be defined in the container. Example:
```
option_settings:
  - option_name: ENV_VAR1
    value: "some value"
  - option_name: ENV_VAR2
    value: "some other value"
```
But wait, there's more ! Get the zip file and upload it to some S3 bucket; I'm going to assume that the file is at `BUCKET_NAME/CONFIG_PATH` in the example below.
Then you need to tell Beanstalk where the file is located. This can be achieved by creating a new application version:
```
aws elasticbeanstalk create-application-version --application-name APPLICATION_NAME --version-label VERSION --source-bundle S3Bucket=BUCKET_NAME,S3Key=CONFIG_PATH
aws elasticbeanstalk update-environment --environment-name ENVIRONMENT_NAME --version-label VERSION
```

---
title: Waiting for AWS Elastic Beanstalk environment to become ready
date: 2018-04-23
tags: [aws]
---
Elastic Beanstalk on AWS seems to be one of those services that are pretty cool but it's hard to get to know them.
One of the tasks you may encounter while working with it is that after making some change to its configuration you would like to wait for it to be finished before proceeding further. The change may be setting an environment variable but can also be deploying a new version of the application. I created a small bash script to help with that; it can be useful when you run this process unattended, e.g. from CI.
```bash
#!/bin/bash
set -e
set -o pipefail

application_name=$1
environment_name=$2
timeout_seconds=$3

function getStatus() {
    aws elasticbeanstalk describe-environments \
        --application-name "$application_name" --environment-name "$environment_name" |
        jq -r '.Environments | .[]? | .Status'
}

sleep_time_seconds=5
max_iterations_count=$(($timeout_seconds / $sleep_time_seconds))
iterations=0

echo "Waiting for a maximum of $timeout_seconds seconds for $environment_name to become ready"
status=$(getStatus)
while [[ ( $status != "Ready" ) && ( $iterations -lt $max_iterations_count ) ]]; do
    status=$(getStatus)
    echo $status
    sleep $sleep_time_seconds
    ((iterations += 1))
done
Happy coding !

---
title: I'm running Rust pair programming sessions !
date: 2018-05-20
tags: [rust]
---
Why ? Rust has such a wonderful community and I want to give back as much as I can.
I am not an expert in Rust but I am not a beginner either. In addition to that I love pair programming !
The result is always much better than I could produce myself. I am happy to both share the knowledge and learn.
I would love to pair with you !
If you're a new Rustacean, fresh to the language - come on in ! If you're an expert - welcome !
We can work on any of the following:
- Any project of yours !
- Contribute back to a larger open source project (I am a contributor to e.g. [cargo](https://github.com/rust-lang/cargo/), [rustc](https://github.com/rust-lang/rust/) and [rustup](https://github.com/rust-lang-nursery/rustup.rs/))
- A project of mine - e.g. [genpass](https://github.com/cyplo/genpass)
Ping me an [email](mailto:rust@cyplo.dev) to schedule a session - can be a remote one or in person somewhere in London.
Thank you !

---
title: Resources for starting your adventure with Rust
date: 2018-05-22 00:00:00
tags: [rust]
---
As I've been running several intro to Rust sessions throughout the last year, I've assembled a set of resources that help people ease into the language.
Depending on your learning style you might like:
- [Rustlings](https://github.com/rustlings/rustlings) - This is a good set of starter exercises if you want to get a feeling for the language - it has links to relevant book sections for each exercise, so you can either start with the book or try to figure it out yourself first. Ah, and it uses the Playground, which means you don't need to install anything on your machine to start.
- [The book itself](https://doc.rust-lang.org/stable/book/second-edition/index.html) - Second edition. Good when you want a solid baseline understanding of the language first.
- [Rust by example](https://doc.rust-lang.org/stable/rust-by-example/) - A set of examples that are runnable within a browser, intertwined with explanatory prose.
- [Exercism](http://exercism.io/)'s Rust exercises - a CLI app that guides you through exercises of increasing difficulty.
- [IntoRust](http://intorust.com/) - A set of short screencasts for the foundational topics.
Make sure to stay up to date with:
- [This week in Rust](https://this-week-in-rust.org/)
- [Awesome Rust](https://rust.libhunt.com/)
And finally:
- [contribute back](https://www.rustaceans.org/findwork/starters) !
- don't forget to join the [user forums](http://users.rust-lang.org/) for the warm welcome.
If you'd like someone to ask questions to or pair program with, [ping me an email](mailto:rust@cyplo.dev).

---
title: Rust - controlling side effects from the test.
date: 2018-07-30 00:00:00
tags: [rust, testing, tdd]
series: rust-testing-tricks
---
# Rust: controlling side effects from the test.
Hello and welcome to the newest episode on testing in Rust.
Imagine you want to write a timestamping repository of some sorts, that will associate the timestamp of when the storage operation was invoked with the stored value.
How to write it in Rust ? And more importantly - how to test it ?
I would like to share a solution I found and talk a bit about how it works.
Please note that this solution can be used anywhere where you need to pass a handle that is remembered by the production code, and that thing it points to - you then want to change from the test.
```rust
trait Clock {
    fn now(&self) -> Instant;
}

struct SystemClock;

impl SystemClock {
    fn new() -> Self {
        SystemClock {}
    }
}

impl Clock for SystemClock {
    fn now(&self) -> Instant {
        Instant::now()
    }
}

struct TimestampingRepository<'a, ClockType>
where
    ClockType: Clock + 'a,
{
    clock: &'a ClockType,
    storage: Vec<(Instant, u32)>, // (timestamp, value)
}

impl<'a, ClockType> TimestampingRepository<'a, ClockType>
where
    ClockType: Clock + 'a,
{
    fn with_clock(clock: &'a ClockType) -> Self {
        TimestampingRepository {
            clock,
            storage: vec![],
        }
    }

    fn store(&mut self, value: u32) {
        self.storage.push((self.clock.now(), value));
    }

    fn all_stored(&self) -> Vec<(Instant, u32)> {
        self.storage.clone()
    }
}

#[cfg(test)]
mod should {
    #[test]
    fn handle_seconds() {
        let clock = FakeClock::with_time(Instant::now());
        let mut repository = TimestampingRepository::with_clock(&clock);

        repository.store(1);
        clock.move_by(Duration::from_secs(32));
        repository.store(2);

        let time_difference = time_difference_between_two_stored(repository);
        assert_eq!(32, time_difference.as_secs());
    }

    struct FakeClock {
        now: Instant,
        move_by_secs: AtomicUsize,
    }

    impl FakeClock {
        fn with_time(now: Instant) -> Self {
            FakeClock {
                now,
                move_by_secs: AtomicUsize::new(0),
            }
        }

        // WAT no `mut`
        fn move_by(&self, duration: Duration) {
            self.move_by_secs
                .store(duration.as_secs() as usize, Ordering::SeqCst);
        }
    }

    impl Clock for FakeClock {
        fn now(&self) -> Instant {
            let move_by_secs = self.move_by_secs.load(Ordering::SeqCst) as u64;
            self.now + Duration::from_secs(move_by_secs)
        }
    }
}
```
That's a lot of code. And I already skipped `use`s and some definitions to make it shorter.
If you want the full source code to follow along - try this [playground](https://play.rust-lang.org/?gist=7f47c441732b543a918cb491487196d4&version=stable&mode=debug&edition=2015) or [this repo](https://github.com/cyplo/rust-dependency-injection) for the full project including production code usage.
Let's start with the test itself.
The clock appears to be immutable (immovable) in the test, yet we call `move_by` on it and the whole thing appears to be working somehow.
First question: can't we just make the clock mutable and skip all this ?
It appears that sadly (but fortunately) Rust prevents us from doing so.
We [cannot](https://doc.rust-lang.org/book/second-edition/ch04-02-references-and-borrowing.html) have both an immutable and a mutable borrow of the clock in the same scope.
For the full example with an error go [here](https://play.rust-lang.org/?gist=3e496f857f1e016c596ec7c4060538df&version=stable&mode=debug&edition=2015).
What is this sorcery then ?
We use a type that provides [`Interior Mutability`](https://doc.rust-lang.org/book/second-edition/ch15-05-interior-mutability.html), namely [`AtomicUsize`](https://rust-lang-ja.github.io/the-rust-programming-language-ja/1.6/std/sync/atomic/struct.AtomicUsize.html).
On the outside - it looks immutable, yet it provides a thread-safe and very narrow method of mutating the underlying state.
As we trust `AtomicUsize` to be written correctly, we can then proceed and write our Rust code as usual, relying fully on the borrow checker. Rust compiler is happy and our test code is happy.
I wouldn't use this as a pattern in production code - the borrow checker rules are there for a reason.
Please treat it as an escape hatch to be used in specific situations, situations like this.
Happy Rusting !
p.s. if you'd like to chat about Rust - [ping me an email](mailto:rust@cyplo.dev) !

---
title: Testing tricks in Rust
date: 2018-09-25 00:00:00
tags: [rust, tdd, testing]
series: rust-testing-tricks
---
# Use verbs as test module names
Who said that the test module needs to be named `test` ?
Experiment with different module names, pay attention to how the test runner displays the results.
A structure that I like, an example:
`worker.rs:`
```rust
// some production code here

mod should {
    #[test]
    fn consume_message_from_queue() {
        // mock queue, create worker with that queue injected
        // start worker
        // check if queue's 'get_message' was invoked
    }
}
```
Cargo prints `worker::should::consume_message_from_queue` when running this test, which reads nicely and exposes the requirement.
# Interior mutability for controlling state of variables injected from the test
Use e.g. the `atomic` types family or `RefCell` itself to get an immutable handle to internally mutable data.
Useful when you don't want your production code to accept anything that can mutate but you still want to control that value from the test.
See injecting the system clock example in [my previous blog post](/posts/2018/07/rust-injection.html).
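A minimal, dependency-free sketch of the shape this takes (the `FakeCounter` type here is mine, invented just for illustration - see the linked post for the full clock example):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// A handle that looks immutable from the outside...
struct FakeCounter {
    value: AtomicUsize,
}

impl FakeCounter {
    fn new() -> Self {
        FakeCounter {
            value: AtomicUsize::new(0),
        }
    }

    // ...yet can be advanced through `&self` - no `mut` anywhere.
    fn bump(&self) {
        self.value.fetch_add(1, Ordering::SeqCst);
    }

    fn current(&self) -> usize {
        self.value.load(Ordering::SeqCst)
    }
}

fn main() {
    let counter = FakeCounter::new(); // note: not `let mut`
    counter.bump();
    counter.bump();
    println!("{}", counter.current()); // prints 2
}
```

Production code can hold `&FakeCounter` as an immutable borrow, while the test keeps mutating the state behind it.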
# Write the test first
Not really a Rust trick, but hey.
Try writing your test first, before production code.
If you're building a feature or fixing a bug that will affect external behaviour - start with an integration test at the crate level.
Try thinking what would be the ideal code you would like to interact with, what would be the types, what would be the functions ? A broad-strokes scenario, not caring much about implementation details, not caring much about covering all edge cases. Write that code. It does not compile. But it looks nice, you're pleased.
Read through again, add assertions. Add the types. For each missing feature or a bug that is present in this high level scenario - write a unit test. Satisfy that test with changes to production code. Maybe refactor a bit in between. Once the big test is green - you're done !
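A toy illustration of that loop (the `leap_year` example is mine, chosen only for brevity):

```rust
// Production code - written only after the test below existed and failed.
fn leap_year(year: u32) -> bool {
    (year % 4 == 0 && year % 100 != 0) || year % 400 == 0
}

#[cfg(test)]
mod should {
    use super::*;

    // Written first, watched fail, then satisfied by the function above.
    #[test]
    fn recognise_century_rules() {
        assert!(leap_year(2000));
        assert!(!leap_year(1900));
        assert!(leap_year(2024));
    }
}
```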
There is no Rust-focused TDD book just yet for me to recommend, but here, have some for other languages:
- Kent Beck - Test Driven Development: By Example
- Steve Freeman, Nat Pryce - Growing Object-Oriented Software, Guided by Tests
Rust allows for more cool tricks and generally writing less test code than mentioned in these books, so please use your judgment - and the tricks from this article !
# Let's talk !
Have any questions ? Would like to pair on Rust ? Curious about TDD ? Ping me !
Email is good - [hello@cyplo.net](mailto:hello@cyplo.net) or try [Twitter](https://twitter.com/cyplo).
thanks !

---
title: Generate Rust tests from data files
date: 2018-12-25
tags: [rust, tdd, testing]
series: rust-testing-tricks
---
Sometimes you just have a bunch of example data laying around and you want to make sure your code works with all of them.
Some of them are probably short and sweet and could live happily as [doctests](https://doc.rust-lang.org/rustdoc/documentation-tests.html), which are amazing btw. But some of them are more awkward to present in such form, because, for example, of their size or number. Typically when you have an example of how the program should behave you write an example-based unit test. Ideally, each of them would represent an isolated example and they should fail independently. But, converting your source data files into a unit test one by one, manually, can be a bit tedious.
[Rust build scripts](https://doc.rust-lang.org/cargo/reference/build-scripts.html) to the rescue !
What if you could just iterate over the data files you have already and then produce unit tests accordingly ?
What follows is an example of such, where we iterate over directories and generate one unit test per directory, assuming all of them contain files named according to our convention.
I chose to generate integration tests here, but you can generate pretty much any code using this technique.
`tests/test_loader.rs`
```rust
// include tests generated by `build.rs`, one test per directory in tests/data
include!(concat!(env!("OUT_DIR"), "/tests.rs"));
```
`build.rs`
```rust
use std::env;
use std::fs::read_dir;
use std::fs::DirEntry;
use std::fs::File;
use std::io::Write;
use std::path::Path;

// build script's entry point
fn main() {
    let out_dir = env::var("OUT_DIR").unwrap();
    let destination = Path::new(&out_dir).join("tests.rs");
    let mut test_file = File::create(&destination).unwrap();

    // write test file header, put `use`, `const` etc there
    write_header(&mut test_file);

    let test_data_directories = read_dir("./tests/data/").unwrap();
    for directory in test_data_directories {
        write_test(&mut test_file, &directory.unwrap());
    }
}

fn write_test(test_file: &mut File, directory: &DirEntry) {
    let directory = directory.path().canonicalize().unwrap();
    let path = directory.display();
    let test_name = format!(
        "prefix_if_needed_{}",
        directory.file_name().unwrap().to_string_lossy()
    );

    write!(
        test_file,
        include_str!("./tests/test_template"),
        name = test_name,
        path = path
    )
    .unwrap();
}

fn write_header(test_file: &mut File) {
    write!(
        test_file,
        r#"
use crate_under_test::functionality_under_test;
"#
    )
    .unwrap();
}
```
`tests/test-template`
```rust
#[test]
fn {name}() {{
    let input = include_str!("{path}/input-data");
    let expected_output = include_str!("{path}/output-data");

    let actual_output = functionality_under_test(input);

    assert_eq!(expected_output, actual_output);
}}
```
So to recap - first the `build.rs` script creates `$OUT_DIR/tests.rs` file containing all the generated tests code.
The compiler does not know there are tests to launch using the normal integration test procedure there though,
so we use `tests/test_loader.rs` to tell it, by including the generated Rust code into that file.
After that, the compilation proceeds normally, giving us one test per directory and the ability to pinpoint problematic test cases more precisely.
You can then further improve on that, e.g. add more directory structure, split tests into modules etc - you can generate any Rust code this way.
Happy hacking !
p.s.
there are more [Rust testing tricks](/series/rust-testing-tricks) and let me know if you'd like to [pair program with me on anything !](/posts/2018/05/pair-with-me-on-rust)

---
title: Website refresh
date: 2019-04-09
tags: [nikola, hugo, blog]
---
Hello !
As you may have noticed - this website looks different now ! Why is that ? I've not only changed the visual theme but also a lot of underlying infrastructure.
Let's start with describing the old setup and see where we can improve.
The site previously ran on [Nikola](https://getnikola.com/), was built on [Travis](https://travis-ci.org/cyplo/blog) and then pushed to [Netlify](https://www.netlify.com/), which I later changed to [Github Pages](https://pages.github.com/). While it worked, it had some issues of its own:
1. Image galleries were hard to navigate and looked a bit out of place.
2. The site was not rendering the main menu correctly on mobile devices.
3. It was slow to build the site, especially on CI - it took 15 to 30 minutes for the CI run on Travis to get the site deployed - not ideal for fast feedback.
4. Lack of previews for the work in progress - it was hard for me to set up a preview per branch, updated automatically.
All of the above, mixed with me having a bit of a time off, resulted in getting this site migrated to some new and exciting stack !
It now runs on [Hugo](https://gohugo.io/) and uses Netlify as the deployment target. I fixed all of the problems above and added some more niceties !
1. I'm using [Beautiful Hugo](https://themes.gohugo.io/beautifulhugo/)'s Gallery now - see this [old post](/posts/2016/06/12/dont-throw-away-fix/) for an example
2. This theme also makes the site mobile-friendly out of the box
3. It takes 1-3 minutes for the full site deploy
4. Previews just work :)
5. No need to use [custom Netlify sync script I wrote](/posts/2017/03/netlify-incremental-deployer/) anymore, as Hugo is natively supported on Netlify
6. Netlify supports TLS on multiple domains - you can now use [cyplo.dev](https://blog.cyplo.dev/) as well as [cyplo.net](https://blog.cyplo.net) to reach this site :)
As for how the actual migration was done - it was a bit of work, as you can see in this [PR](https://github.com/cyplo/blog/pull/46). While I could migrate the content mostly automatically, I chose to actually use this opportunity to review all the posts manually !
This allowed me to spot and fix some additional issues, like broken links, editorial mistakes etc.
Overall I think this was worth it - the site is now more modern, the sources are smaller and it is fast to build.
If you're curious about the details - all the sources are [here](https://github.com/cyplo/blog).
Happy blogging !

---
title: Fuzzers and how to run them.
date: 2019-04-17
tags: [fuzzing, rust, testing]
---
I am fascinated by the concept of fuzzing. It fits well with my desire to test weird code paths by using more of the computer's time and less that of a programmer.
## What is fuzzing ?
It's a type of automated testing, especially good at finding edge cases in your code. It runs totally outside of your code and knows nothing about it - it just throws random data at it. Modern fuzzers instrument your code to be able to tell whether changing the input changes the code paths covered, and by doing that they try to achieve maximum coverage. While this sounds like a very naive approach, it can lead to finding incredibly interesting bugs. For that reason fuzzers are oftentimes used in the security community for finding vulnerabilities. Fuzzing is a type of black-box testing - but how you define that box is entirely up to you. It can be the whole program, it can be a single function. It just needs a clear entry point with the ability to manipulate input data.
An example may be in order. We'll look into one provided by [Rust Fuzzing Authority](https://github.com/rust-fuzz) - people behind [cargo fuzz](https://fuzz.rs/book/cargo-fuzz.html) and [afl.rs](https://fuzz.rs/book/afl.html).
Imagine you're trying to see if you've handled all cases in your url parser. You can just do
```rust
fuzz_target!(|data: &[u8]| {
    if let Ok(s) = std::str::from_utf8(data) {
        let _ = url::Url::parse(s);
    }
});
```
And that's it ! You define a function that takes raw bytes, discards the ones that are outside of the UTF-8 string space and then invokes the function under test with the resulting string. If this function completes, the run is considered ok; if it panics - not.
I wanted to use this example because it is very typical of how a fuzzer would be used.
1. get random input from the engine
2. discard some of the input
3. drive the function under test
4. check some simple property of the run - the faster the property to check the better. For this reason the 'did not crash' property is used often, however you can use any condition you want, as long as the fuzzer has a way of distinguishing between successful and failed runs.
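The four steps above can be sketched as a naive, coverage-unaware loop; the tiny xorshift generator and the `parse_under_test` stand-in are mine, a real harness would call into your crate and let the engine mutate inputs based on coverage:

```rust
use std::panic;

// Stand-in for the function under test; never panics here, so no crash is found.
fn parse_under_test(input: &str) -> usize {
    input.trim().len()
}

// Tiny xorshift64 PRNG so the sketch stays dependency-free.
fn next(state: &mut u64) -> u64 {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    *state
}

fn main() {
    let mut state = 0x2545F4914F6CDD1Du64;
    for _ in 0..1_000 {
        // 1. get random input from the engine
        let len = (next(&mut state) % 64) as usize;
        let bytes: Vec<u8> = (0..len).map(|_| (next(&mut state) % 256) as u8).collect();
        // 2. discard input that is not valid UTF-8
        if let Ok(s) = std::str::from_utf8(&bytes) {
            // 3. drive the function under test; 4. check the 'did not crash' property
            let result = panic::catch_unwind(|| parse_under_test(s));
            assert!(result.is_ok(), "crash found for input {:?}", s);
        }
    }
    println!("no crashes in 1000 runs");
}
```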
## How does it compare to property-based tests ?
When thinking about other types of tests that are driven by randomness and are generally black-box-ish - property-based tests come to mind. How does fuzzing compare ?
- Fuzzing is good at finding hidden properties of existing code; it is used after the code is written, while property-based tests can also be used in a TDD workflow
- In property-based tests you typically want to describe behaviour - they serve as good documentation. Fuzzing is not really as good at that. It can automatically collect examples though, which can then be manually transformed into a property.
- As you want your program to always hold all its properties, you run all property-based tests within your CI/CD pipeline. For this reason they tend to test only things that are relatively fast to check, execution-time wise. Fuzzers are typically run overnight or for weeks - finding those edge cases, providing more properties for you to know and care about.
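For contrast, here is a property-based test in spirit - a dependency-free sketch rather than real `proptest` or `quickcheck` usage - which states a behaviour up front (formatting then parsing an integer gives it back) and checks it for many generated inputs:

```rust
// same tiny PRNG used as a stand-in input generator
fn xorshift(state: &mut u64) -> u64 {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    *state
}

fn main() {
    let mut state = 42u64;
    for _ in 0..1_000 {
        let x = xorshift(&mut state) as i64;
        // the property under test: to_string/parse roundtrips for any i64
        assert_eq!(x.to_string().parse::<i64>(), Ok(x));
    }
    println!("property held for 1000 cases");
}
```

Note how the assertion documents the intended behaviour, whereas the fuzz target earlier only asserted "does not crash".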
## Want help with fuzzing your open source project ?
If you're not sure what fuzzing is, or you want help with adding fuzzer support to your open source project - just tell me ! Either by [email](mailto:fuzzing@cyplo.dev) or on [Twitter](https://twitter.com/cyplo). I'm happy to jump on a call, pair program or whatever you fancy ! I know the most about the Rust fuzzing space, but happy to help with other languages as well - this allows me to learn :)
I also run a fuzzing server where I can test your project overnight and see if we find anything.
With that in mind - happy fuzzing !

---
title: My 2019 setup
date: 2019-11-16
series: my-setup
tags: [tools]
---
Here's a list of software and some hardware I find useful, either things that I use daily or things that make an unusual task pleasant instead of incredibly difficult.
This is constantly evolving, so please mind the publish date of this post to gauge how dated it is. The best place to get most recent settings I currently use is my [dotfiles](https://github.com/cyplo/dotfiles) repository, especially the [nixos](https://github.com/cyplo/dotfiles/tree/master/nixos) configuration.
## Laptop and OS
My daily driver is [Thinkpad T480](https://www.thinkwiki.org/wiki/Category:T480#Lenovo_ThinkPad_T480) running [NixOS](https://nixos.org/). I like the hardware for its matte screen, nice keyboard and general sturdiness. It's also black and black is nice.
The device-specific config lives [here](https://github.com/cyplo/dotfiles/blob/master/nixos/boxes/foureighty.nix).
I use [home manager](https://github.com/rycee/home-manager) to manage my dotfiles.
I still haven't found a good way of having all this setup ported exactly to non-NixOS operating systems.
Within the NixOS land I'm all set up now though, as an example -
I have an older laptop that I used previously, and installing NixOS there took a total of around half an hour, giving me **exactly** the same look and feel for all the software I run as I'm used to.
NixOS's ability to boot into a previous version of the whole system is another big plus to me, as I like to experiment with my settings and sometimes I break something important. It creates an environment that does not punish you for trying something new, it encourages that.
## Networking gear
I had a lot of problems with network speed over wifi previously, so I bought a Netgear [Nighthawk X4S R7800](https://www.netgear.co.uk/home/products/networking/wifi-routers/R7800.aspx) router and installed [OpenWRT](https://openwrt.org/) on it. After this change, I am able to run very frequent multi-gigabyte backup transfers over wifi and everything works nicely.
## Graphical interface
I started running [i3](https://i3wm.org/) recently because of the strain Gnome3 was putting on my system, and I am liking it so far, mostly for the speed of navigation and low resource usage. There's [polybar](https://github.com/polybar/polybar) on top and not much else in terms of bells and whistles present.
I use [autorandr](https://github.com/phillipberndt/autorandr) to keep track of different display devices on different machines - it will automatically set the best resolution for whatever the screen combination I am currently using. See [config](https://github.com/cyplo/dotfiles/blob/master/nixos/user-xsession.nix) for more details.
[Firefox](https://www.mozilla.org/en-GB/firefox/new/) remains my browser of choice, I highly recommend you try it, it is so much faster now than it used to be. Make sure to switch the tracking protection on.
## Secrets management
I have a [veracrypt](https://www.veracrypt.fr/en/Home.html) encrypted container, where my secrets reside, with a small set of scripts to [mount](https://github.com/cyplo/dotfiles/blob/master/tools/mount-vault) and [unmount](https://github.com/cyplo/dotfiles/blob/master/tools/umount-vault) it. The container is synced between different machines using [syncthing](https://syncthing.net/).
Inside the container, among other things, there is a [password store](https://www.passwordstore.org/) directory, which I use from either the command line or from Firefox, using [this plugin](https://github.com/passff/passff).
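The linked mount/unmount scripts boil down to something like this - the container path and mount point here are placeholders, not my actual paths:

```shell
# hypothetical locations - adjust to your own setup
VAULT="$HOME/sync/vault.hc"
MOUNTPOINT="$HOME/vault"

# mount - veracrypt in console mode asks for the password interactively
veracrypt --text "$VAULT" "$MOUNTPOINT"

# ... work with the secrets ...

# unmount when done
veracrypt --text --dismount "$VAULT"
```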
## Sync
[Syncthing](https://syncthing.net/) just keeps working, no matter how many devices I attach and what is their configuration. I run it on all my machines, including mobile devices and it just works. This is how I keep all my documents, photos and other data always fresh between all devices. Just make sure to encrypt the data at rest when using it, you don't want to sync to a device which someone else can take from you and read all the data off of.
Sync is also not a replacement for backup, as file deletions and corruption can spread easily across your fleet.
## Backups
Here is where I am not that happy with the overall setup.
Currently I use [restic](https://restic.net/) to encrypt the backup and then ship it off of individual machines to my central NAS storage. From there it is shipped to Backblaze's [b2](https://www.backblaze.com/b2/cloud-storage.html) for off-site storage.
In principle, this is the setup I would like to retain, where the encryption credentials are only on machines creating backups and everything else only sees already-encrypted files. In practice, restic itself seems to have a lot of trouble with the source machines being laptops that are constantly opened and closed, which causes the running backup process to go through hibernation cycles. This locks/damages the central backup repo quite frequently and I need to run `restic rebuild-index` quite often to keep things working.
For this reason I started working on [bakare](https://github.com/cyplo/bakare), a small backup engine in Rust - [let me know](mailto:bakare@cyplo.net) if you would be interested in collaborating with me on it.
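For reference, the flow described above boils down to roughly these commands - repo locations and paths are placeholders, and the off-site copy could be done with e.g. `rclone` (not necessarily what I use):

```shell
# the repo password stays only on the machines creating backups
export RESTIC_PASSWORD_FILE="$HOME/.restic-password"

# on each laptop: encrypt and ship the backup to the NAS
restic -r sftp:nas:/backups backup "$HOME/documents"

# on the NAS: copy the already-encrypted repo off-site, e.g. with rclone
rclone sync /backups b2:my-backup-bucket

# after an interrupted run: check and repair the repo
restic -r /backups check
restic -r /backups rebuild-index
```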
## Editors
I use a combination of [vim](https://www.vim.org/), [VSCode](https://code.visualstudio.com/) and different [JetBrains'](https://www.jetbrains.com/) IDEs for work. I like IDEs mostly for refactoring and debugging capabilities, while vim and VSCode for speed of editing individual files. I still use vim-mode in IDEs though.
## Fonts
I settled on [Fira Code Retina](https://github.com/tonsky/FiraCode) for most of my programming and terminal needs.
## Terminal
I find [termite](https://github.com/thestinger/termite) quite fast, while supporting extended character and colour sets.
My shell is [zsh](https://www.zsh.org/) with a minimal [oh-my-zsh](https://github.com/robbyrussell/oh-my-zsh) [config](https://github.com/cyplo/dotfiles/blob/master/nixos/programs/zsh.nix). I always run it inside a [tmux](https://github.com/tmux/tmux/wiki) session though, and no matter how many terminal windows I open, I am always greeted with the same state. History and window state are shared between all terminal windows and all tmux windows as well - it is always the same single tmux session. Because I am always running tmux, sometimes I end up in a situation where I ssh into some box and need to run tmux there - for that reason my main tmux session has a different leader key than the default; this way I can choose which tmux session will receive my command - my machine or the one I'm connecting to. All of the above comes from a combination of [shell](https://github.com/cyplo/dotfiles/blob/master/nixos/programs/zsh.nix) and [tmux](https://github.com/cyplo/dotfiles/blob/master/nixos/programs/tmux.nix) settings.
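The leader-key trick is a few lines of `tmux.conf` - my actual key choice lives in the linked config; `C-a` here is just an example:

```text
set -g prefix C-a      # local sessions answer to Ctrl-a...
unbind C-b             # ...instead of the default Ctrl-b
bind C-a send-prefix   # hit the leader twice to pass it through to a nested/remote tmux
```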
Here's a small collection of other tools I found help a lot when on the terminal:
- [ripgrep](https://github.com/BurntSushi/ripgrep) - it is just so much faster than grep
- [fd](https://github.com/sharkdp/fd) - same but for `find`
- [bat](https://github.com/sharkdp/bat) - a cooler `cat`
- [genpass](https://crates.io/crates/genpass) for generating passwords
- [z.lua](https://github.com/skywind3000/z.lua) for faster navigation
And that's it !
I hope you'll find this list useful and do not hesitate to [contact me](mailto:blog@cyplo.net) on `blog@cyplo.net` if you have any questions or comments. Happy hacking !

---
title: Death and Life of American Cities
date: 2019-04-09
tags: [book]
draft: true
---
review

---
title: Add a concept of post series in Hugo
date: 2019-04-09
tags: [hugo]
draft: true
---

---
title: Migrating laptop workstation to to Nixos
date: 2019-04-09
tags: [linux, nixos]
draft: true
---
Nixos

pairing sessions update
- using genpass
- fuzzing

---
title: Buttermilk cornbread
date: 2020-10-10
tags: [baking]
---
I like baking, as it can be something that does not involve any computers.
Please have this recipe for cornbread, based on buttermilk and baked in a skillet.
As far as baking goes, this recipe is quite forgiving and you don't need to watch ratios as closely as with others.
## Ingredients
This is for a 20cm (8") skillet, which gives you enough bread for you and one other person to enjoy over a weekend. For a 25cm (10") skillet - double the amounts. It's best to use a cast iron skillet as it has enough thermal mass to take pouring cold batter into it and still be reasonably hot afterwards. Flour and cornmeal volume measurements are for flour poured into a cup.
```
1/2 tablespoon vegetable oil
300ml cornmeal
150ml all-purpose flour
1/2 teaspoon salt
1 teaspoon baking powder
1/2 teaspoon baking soda
300ml buttermilk
2 eggs
2 tablespoons melted butter
```
## Steps
- Put the veggie oil in the skillet, put the skillet in the oven, preheat to 200C. This takes a while and gives you time to prepare the batter. The oil shouldn't smoke.
- Combine all dry ingredients in a large bowl, mix well.
- In another bowl whisk eggs and then melted butter into the buttermilk. No need for any extensive whisking, just to combine well.
- Make sure your oven came to temp.
- Pour wet ingredients into the dry, mix well. Acid from the buttermilk will start to react with bases from the baking soda and powder leavening the mixture. Do not overmix and work reasonably quickly, to not lose the bubblage. The resulting batter should be thick, barely pourable.
- Take the hot skillet out of the oven, make sure the oil is spread evenly.
- Pour the batter onto the hot skillet, most of it should pour but the last of it you may need to transfer using a spoon. Even out the surface, but don't be too pedantic about it.
- Put the skillet back into the oven, reduce the heat to 185C.
- Bake for around 20-30 minutes; it does not burn easily, and you can put it under a broiler for a few minutes at the end if you want more brownage. The time needed does not change much with the volume.
- After baking you can keep it in the skillet, or take it out. It does not need to release much moisture, but also should come out pretty easily when fully baked.

---
title: Don't let failures spread over your suite with process-based tests isolation
date: 2020-12-28
series: rust-testing-tricks
tags: [rust, testing]
---
Being able to precisely control what failures occur in underlying systems, and when, can be really useful in achieving a fast and stable test suite. While I am a big proponent of dependency inversion and of controlling dependencies via explicit injection points in your API, sometimes it's impractical to do so. This is where [`fail`](https://crates.io/crates/fail) can help us immensely, providing an escape hatch for situations like those, as it allows injecting failures into previously defined fail points.
It comes at a price though. If you mix your other unit tests with tests activating fail points, you will notice some unexpected failures in the test suite. As `cargo test` runs tests in parallel by default, a test activating a fail point can interfere with another test run at the same time that did not want that fail point active at all. The crate authors [recommend](https://docs.rs/fail/#usage-in-tests) running all of the tests using fail points in a separate executable and using `FailScenario` to serialise test execution.
There is another way that I found simpler for the way I write tests, if you allow for yet another helper crate. We can run each test in a separate process, effectively isolating it from the rest and stopping failures from spreading.
Let's take a look at an example from [`bakare`](https://git.sr.ht/~cyplo/bakare) - my experiment in writing a backup system.
`cargo.toml`
```toml
[dependencies]
fail = "0.4"
[dev-dependencies]
two-rusty-forks = "0.4.0"
[features]
failpoints = [ "fail/failpoints" ]
```
`lock.rs`
```rust
/// this function is called from `Lock::lock()`
fn create_lock_file(lock_id: Uuid, index_directory: &VfsPath) -> Result<()> {
...
fail_point!("create-lock-file", |e: Option<String>| Err(anyhow!(e.unwrap())));
let mut file = lock_file_path.create_file()?;
...
}
mod must {
use super::Lock;
use anyhow::Result;
/// only import the macro when `failpoints` feature is enabled
#[cfg(feature = "failpoints")]
use two_rusty_forks::rusty_fork_test;
use vfs::{MemoryFS, VfsPath};
#[test]
/// this is a normal unit test
/// we don't want for it to be affected by the fail points being active
fn be_released_when_dropped() -> Result<()> {
let temp_dir: VfsPath = MemoryFS::new().into();
{
let _lock = Lock::lock(&temp_dir);
}
let entries = temp_dir.read_dir()?.count();
assert_eq!(entries, 0);
Ok(())
}
#[cfg(feature = "failpoints")]
rusty_fork_test! { /// use the macro to create a separate process for this test
#[test]
fn be_able_to_lock_when_creating_lock_file_fails_sometimes() {
/// activate the fail point
fail::cfg("create-lock-file", "90%10*return(some lock file creation error)->off")
.unwrap();
let path = MemoryFS::new().into();
let lock = Lock::lock(&path).unwrap();
lock.release().unwrap();
}
}
...
}
```

---
title: My 2020 setup
date: 2020-12-29
series: my-setup
tags: [tools]
---
Hello and welcome to the second edition of me doing a summary of the year of using tech tools.
Here's a list of software and some hardware I find useful, either things that I use daily or things that make an unusual task pleasant instead of incredibly difficult.
This is constantly evolving, so please mind the publish date of this post; please also check [last year's installment](https://blog.cyplo.dev/posts/2019/11/tools/) as this article makes references to the previous setup.
## Laptop, OS and other work hardware
- [Thinkpad T480](https://www.thinkwiki.org/wiki/Category:T480#Lenovo_ThinkPad_T480), the device-specific config lives [here](https://git.sr.ht/~cyplo/dotfiles/tree/master/item/nixos/boxes/foureighty/default.nix).
- CalDigit TS3 Plus usb-c hub
- Iiyama ProLite 27" 4K monitor
- IKEA BEKANT motorised standing desk
No dramatic changes here since [last year](https://blog.cyplo.dev/posts/2019/11/tools/), I'm still on [NixOS](https://nixos.org/) on the T480. The Thinkpad got a RAM upgrade to 48GiB and is handling it well, despite it being above its stated memory limit. I've also managed to get my `home-manager` config [called](https://git.sr.ht/~cyplo/dotfiles/tree/83ddcc09dc68389b129d598722eca9e90a6dff33/item/nixos/boxes/foureighty/default.nix#L33) from the main system configuration, so now I just do `sudo nixos-rebuild switch --upgrade` once and it does my `home-manager` setup as well. This allowed me to split the config into modules a bit better.
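Calling `home-manager` from the system configuration looks roughly like this - a minimal sketch, assuming home-manager is imported as a NixOS module via a channel; the username and the enabled program are illustrative, not my actual config:

```nix
{ config, pkgs, ... }:
{
  # pulls home-manager in as a NixOS module, so that a single
  # `sudo nixos-rebuild switch --upgrade` applies the user config too
  imports = [ <home-manager/nixos> ];

  home-manager.users.cyplo = { pkgs, ... }: {
    # per-user configuration goes here, split into modules as needed
    programs.git.enable = true;
  };
}
```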
I'm really happy with the standing desk as it allows me to switch from sitting to standing and back very quickly.
For the USB-C hub, I switched from the StarTech one I had before and this one is much more stable now.
### TODOs for 2021:
- to look into [`flakes`](https://nixos.wiki/wiki/Flakes) and see if I want to port my config over to that style.
## Networking
- [Turris Omnia](https://www.turris.com/en/omnia/overview/)
- [Devolo Magic 2 WiFi](https://www.devolo.co.uk/magic-2-wifi) powerline adapters
While I stopped having WiFi speed problems after purchasing the Netgear R7800 router, it started dropping packets in the summer. I suspect it started overheating; whether that was solely a problem with the hardware itself or also influenced by where the router was placed and the airflow available, I don't know. As I needed this solved quickly, I bought a Turris Omnia as a replacement and am quite happy with it.
### TODOs for 2021:
- debug the overheating problem
- try to port [NixWRT](https://github.com/telent/nixwrt) to run on R7800
## Graphical interface
- [i3](https://git.sr.ht/~cyplo/dotfiles/tree/83ddcc09dc68389b129d598722eca9e90a6dff33/item/nixos/i3)
- [grobi](https://git.sr.ht/~cyplo/dotfiles/tree/83ddcc09dc68389b129d598722eca9e90a6dff33/item/nixos/i3/grobi.nix)
Running `grobi` now instead of `autorandr` but otherwise the config seems to be stabilising.
## Secrets management
- [veracrypt](https://www.veracrypt.fr/en/Home.html) + [syncthing](https://syncthing.net/)
- [password store](https://www.passwordstore.org/) + [passff](https://github.com/passff/passff)
- [bitwarden](https://bitwarden.com/)
I have a `veracrypt` encrypted container, where my secrets reside, with a small set of scripts to [mount](https://git.sr.ht/~cyplo/dotfiles/tree/83ddcc09dc68389b129d598722eca9e90a6dff33/item/tools/mount-vault) and [unmount](https://git.sr.ht/~cyplo/dotfiles/tree/83ddcc09dc68389b129d598722eca9e90a6dff33/item/tools/umount-vault) it. The container is synced between different machines using `syncthing`.
Inside the container, among other things, there is a `password store` directory, which I use from either command line or from Firefox.
For when I need to share a secret I use `bitwarden`, as it allows for that in quite an easy way.
## Sync
[Syncthing](https://syncthing.net/) just keeps working, no matter how many devices I attach and what is their configuration. I run it on all my machines, including mobile devices and it just works. This is how I keep all my documents, photos and other data always fresh between all devices. Just make sure to encrypt the data at rest when using it, you don't want to sync to a device which someone else can take from you and read all the data off of.
Sync is also not a replacement for backup, as file deletions and corruption can spread easily across your fleet.
## Backups
- [restic](https://restic.net/)
Here is where I am not that happy with the overall setup, and not that much has changed from 2019.
Currently I use `restic` to package and encrypt the backup and then ship it off of individual machines to my central NAS storage. From there it is copied to Backblaze's [b2](https://www.backblaze.com/b2/cloud-storage.html) for off-site storage.
I made some progress on [bakare](https://github.com/cyplo/bakare), a small backup engine in Rust, but it's not ready for production use yet. [Let me know](mailto:bakare@cyplo.net) if you would be interested in collaborating with me on it.
### TODOs for 2021:
- finish bakare ?
## Editors
- vim [configured via nix](https://git.sr.ht/~cyplo/dotfiles/tree/83ddcc09dc68389b129d598722eca9e90a6dff33/item/nixos/home-manager/programs/vim.nix)
- VSCode [also configured via nix](https://git.sr.ht/~cyplo/dotfiles/tree/83ddcc09dc68389b129d598722eca9e90a6dff33/item/nixos/gui/vscode.nix)
Not much of a change here.
## Fonts
I settled on [Fira Code Retina](https://github.com/tonsky/FiraCode) for most of my programming and terminal needs.
## Terminal
- [kitty](https://sw.kovidgoyal.net/kitty/) and [my config](https://git.sr.ht/~cyplo/dotfiles/tree/83ddcc09dc68389b129d598722eca9e90a6dff33/item/nixos/home-manager/programs/kitty.nix) for it
- [zsh](https://www.zsh.org/) + [config](https://git.sr.ht/~cyplo/dotfiles/tree/83ddcc09dc68389b129d598722eca9e90a6dff33/item/nixos/home-manager/programs/zsh.nix)
- [tmux](https://github.com/tmux/tmux/wiki) + [config](https://git.sr.ht/~cyplo/dotfiles/tree/83ddcc09dc68389b129d598722eca9e90a6dff33/item/nixos/home-manager/programs/tmux.nix)
While I still like `termite` and `alacritty`, I have switched to `kitty` lately as it is still quite fast while allowing for some fancy shenanigans like graphics in the terminal. I'm not fully happy with my setup yet as, for example, kitty sometimes requires a restart for the graphics feature to work.
Here's a small collection of other tools I found help a lot when on the terminal:
- [newsboat](https://newsboat.org/) - a new one on the list, I find it providing a cool and relaxing way of reading RSS in the terminal, see [here](https://git.sr.ht/~cyplo/dotfiles/tree/83ddcc09dc68389b129d598722eca9e90a6dff33/item/nixos/home-manager/programs/newsboat.nix) for my config, including my subscriptions list
- [ripgrep](https://github.com/BurntSushi/ripgrep) - it is just so much faster than grep
- [fd](https://github.com/sharkdp/fd) - same but for `find`
- [bat](https://github.com/sharkdp/bat) - a cooler `cat`
- [genpass](https://crates.io/crates/genpass) for generating passwords
- [z.lua](https://github.com/skywind3000/z.lua) for faster navigation
### TODOs for 2021:
- make `kitty`'s image mode always work out of the box
## Code hosting
I'm trying to migrate off of GitHub and am trying out [sr.ht](https://sr.ht/) now. I like its simplicity and ease of setup, especially when it comes to the built-in CI.
## The end
And that's it !
I hope you'll find this list useful and do not hesitate to [contact me](mailto:blog@cyplo.net) on `blog@cyplo.net` if you have any questions or comments. Happy hacking !

---
title: Using own OpenWRT router with Vodafone/Cityfibre
date: 2021-11-04
tags: [hardware, freedom]
---
For something very different this time, a quick recipe for a very specific situation.
One of the very few fiber-to-the-home solutions in the UK is through Cityfibre, sold as Vodafone "Gigafast" packages where I live. As with other ISPs they give you their own internet router. If you would like to use your own, here's how.
`TLDR; Obtain PPPoE credentials specific to your account from Vodafone, then use them to configure your router, you need to tag all your packets with VLAN 911.`
1. Obtain PPPoE credentials from Vodafone. You can do so by using the chat feature on their website. Here's an example chat transcript pasted here for your enjoyment and pleasure
```
Tobi : Im TOBi - Vodafones chatbot Ask me anything, Im here to help.
You : hi, I would like to swap the provider router for the own router, could you share the PPPoE credentials I would need to use ? thank you
Tobi : Please try asking your question again using fewer words, making sure you only ask one question at a time. Youll be able to ask other questions later if you need to.
You : ppoe credentials
Tobi : TOBi is thinking...
TOBi : Sorry, I didn't understand the question. Can you please try rephrasing?
You : own router ppoe credentials
TOBi : Sorry, I didn't understand the question. Can you please try rephrasing?
You : can I speak to a human please ?
TOBi : Could you tell me your full name please?
You : xxxxxxxxxxxxxxx
TOBi : Could you please tell me your phone number?
You : xxxxxxxxxxx
TOBi : Please tell me what type of account you have with us:
TOBi :
Pay Monthly
Pay As You Go
Small Business
Broadband
You : Broadband
Pragya : Hi , you're now connected to Broadband chat support. Please give me 1-2 minutes as I quickly go through your conversation so that you don't have to repeat yourself.
You : Hi, perfect, thank you :)
Pragya : Cyryl , Thank you for waiting
Pragya : As per the pre chat , I understand that you wish to have PPOE details , right , Cyryl ?
You : that's correct, I'm trying to establish connection using own hardware
Pragya : Cyryl , Thank you lets do a quick security check ,
Please share your broadband account number or home phone number once so I can locate your account and proceed further.
You : account number xxxxxxxxxx
Pragya : Thank you very much
Pragya : I have your account ready. Let's complete a quick security check so that I can try to resolve this for you.
Pragya : Confirm the 2nd and 3rd digit of the security PIn ?
You : xxxxxxx
Pragya : Thank you let me verify it
Pragya : Brilliant that matches the record
Pragya : Would you mind being online for few minutes, while I am checking the details ?
You : no problem, thank you
Pragya : Thank you , Moving away from the chat window or not responding for 8 mins may result in losing the chat. If it does, here's a link to get back to us faster than usual: https://www.vodafone.co.uk/bbservcd
Pragya : Thank you for waiting
Pragya : Here is the username - dsl123456789@broadband.vodafone.co.uk
Pragya : Password is - XXXXXXXX
```
2. Armed with that knowledge you can configure your router. I'm gonna give an example of configuring an OpenWRT router via their GUI interface, LuCI:
* backup your settings
* -> Network->Interfaces
* note down the physical interface of current WAN interface - (`WAN`->Edit->Physical Settings), e.g. `eth2`
* remove existing `WAN` interface, press `save and apply`
* add a new PPPoE interface:
* Name: `WAN`
* Protocol: `PPPoE`
* interface - type in the `custom` field - name of your physical interface followed by a dot and `911` indicating the VLAN number to tag packets with - e.g. `eth2.911`
* -> create interface
* PAP/CHAP username: [username from Vodafone]
* PAP/CHAP password: [password from Vodafone]
* -> Firewall settings
* Create / Assign firewall-zone: `WAN`
* -> save
* -> save and apply
* wait around a minute for the router to connect
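The same steps can be done over SSH with UCI instead of LuCI - a sketch using the placeholder credentials and interface name from above; note that on OpenWRT 21.02+ the option is `device`, while older releases use `ifname`:

```shell
# recreate the WAN interface as PPPoE on the VLAN-911-tagged port
uci set network.wan=interface
uci set network.wan.proto='pppoe'
uci set network.wan.device='eth2.911'   # physical WAN port + VLAN 911 tag
uci set network.wan.username='dsl123456789@broadband.vodafone.co.uk'
uci set network.wan.password='XXXXXXXX'
uci commit network
/etc/init.d/network restart
```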

---
title: Quick reproducible zig dev env using nix flakes
date: 2022-01-01
tags: [zig, nix]
---
Want this great feeling of entering a directory and having all your dev setup done but can never remember how to set up nix flakes ?
What you'll need, in addition to a working [direnv](https://direnv.net/) with `nix` and `nix flake` enabled, is:
`.envrc`:
```text
use flake
```
`.gitignore`:
```text
.direnv/
```
`flake.nix` (this `nixpkgs` hash points to the first revision with zig 0.9 present):
```nix
{
inputs = {
nixpkgs = {
type = "github";
owner = "nixos";
repo = "nixpkgs";
ref = "e1eeb53e64159fbb2610ba7810ed511e4d5c10ca";
};
flake-compat = {
url = "github:edolstra/flake-compat";
flake = false;
};
};
outputs = { self, nixpkgs, flake-compat }:
let pkgs = nixpkgs.legacyPackages.x86_64-linux;
in
{
devShell.x86_64-linux =
pkgs.mkShell {
buildInputs = with pkgs;[
nixpkgs-fmt
zig
zls
];
};
};
}
```
and some `shell.nix` for this extra bit of `nix-shell` compatibility, if you'd want it:
```nix
(import
(
let
lock = builtins.fromJSON (builtins.readFile ./flake.lock);
in
fetchTarball {
url = "https://github.com/edolstra/flake-compat/archive/${lock.nodes.flake-compat.locked.rev}.tar.gz";
sha256 = lock.nodes.flake-compat.locked.narHash;
}
)
{
src = ./.;
}).shellNix
```
Don't forget to:
* `git add` all of the above, otherwise nix flake operations might not work
* `direnv allow` this directory
* have fun !
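Putting it together, bootstrapping a fresh project looks roughly like this (assuming `nix`, `direnv` and flakes are already set up as above):

```shell
# one-time bootstrap in a fresh project directory
git init
git add .envrc .gitignore flake.nix shell.nix   # flake operations only see tracked files
direnv allow     # direnv runs `use flake`, building the dev shell
zig version      # tools from the flake are now on PATH
```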
