Project description
The goal of the project is to provide a cheap and convenient solution for handling HAM radio Nets. The solution is built on AWS cloud services but can be ported to any other major cloud provider with minimal changes. It consists of four main components:
- Scheduler
- Frontend: login/status page
- Docker container to automate check-ins for keyups during the Net
- Backend: Net Control station interface
Data flow diagram:
Scheduler
The scheduler (based on AWS Lambda) handles the enable/disable procedure for the frontend. The Lambda triggers frontend deployment/removal (using AWS CloudFormation) and then performs additional DNS manipulations. EventBridge wasn't available at development time.
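As an illustration, here is a minimal sketch of such a handler using boto3; the stack name, template URL, hosted zone ID, record name, and addresses are all placeholders, not the project's actual configuration.

```python
# Minimal sketch of the scheduler Lambda, assuming boto3, a CloudFormation
# template for the frontend stack, and a Route 53 hosted zone. Stack name,
# template URL, zone ID, record name, and addresses are all placeholders.
import boto3

cfn = boto3.client("cloudformation")
r53 = boto3.client("route53")

STACK_NAME = "net-frontend"        # hypothetical stack name
TEMPLATE_URL = "https://example-bucket.s3.amazonaws.com/frontend.yaml"
HOSTED_ZONE_ID = "Z0000000000000"  # hypothetical hosted zone


def handler(event, context):
    """Enable the frontend before the Net, disable it afterwards."""
    if event.get("action") == "enable":
        # Deploy the EC2-backed frontend and wait until it is up.
        cfn.create_stack(StackName=STACK_NAME, TemplateURL=TEMPLATE_URL)
        cfn.get_waiter("stack_create_complete").wait(StackName=STACK_NAME)
        # A real handler would look up the instance address from stack outputs.
        _upsert_record("A", "203.0.113.10")
    else:
        # Tear the stack down and point DNS back at the static S3 site.
        cfn.delete_stack(StackName=STACK_NAME)
        # A real handler would also delete the now-stale A record.
        _upsert_record("CNAME", "bucket.s3-website-us-east-1.amazonaws.com")


def _upsert_record(rtype, value):
    r53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "net.example.com.",
                    "Type": rtype,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": value}],
                },
            }]
        },
    )
```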
Frontend
The frontend includes a login page and an administrator view to set up Net-related information such as the topic, the Net Control station, etc.
The frontend has two states:
- passive - when there is no Net and the site provides static disclaimer information
- active - when a Net is going on: people are checking in and the Net Control station is managing the Net
In passive mode it's a static website (hosted on AWS S3) built on the Bootstrap framework, with menus and site control elements disabled. As a result, the frontend is always reachable and provides minimal status information. This is probably the cheapest way to keep a status page always online, so there is no confusion about the website being unreachable when there is no Net.
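For reference, static website hosting on the bucket can be switched on with a single boto3 call; the bucket name and document keys in this sketch are placeholders.

```python
# Sketch of enabling static website hosting on the S3 bucket with boto3;
# the bucket name and document keys are placeholders.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_website(
    Bucket="net-status-page",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
```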
In active mode it's a Flask application hosted on an EC2 (virtual machine) instance for the duration of the Net; a minimal instance type is enough to handle the load. It provides basic information and controls: Net topic, Net Control Operator, check-ins, etc. The instance is terminated when the Net is over. An On-Demand instance is used rather than a Spot instance to avoid interruption and re-provisioning mid-Net; the cost is minimal either way, so there is no need for the additional complexity.
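A minimal sketch of what the active-mode application could look like, assuming Flask and an in-memory check-in list; the routes, field names, and values are illustrative, not the project's actual code.

```python
# Minimal sketch of the active-mode app, assuming Flask and an in-memory
# check-in list. Routes, field names, and values are illustrative only.
from flask import Flask, jsonify, request

app = Flask(__name__)

net_info = {"topic": "Weekly Net", "net_control": "N0CALL"}  # placeholders
checkins = []  # in-memory table; the real system also persists to Google Sheets


@app.route("/status")
def status():
    # Basic Net information for the status page.
    return jsonify(net=net_info, checkin_count=len(checkins))


@app.route("/checkin", methods=["POST"])
def checkin():
    # Record a check-in submitted from the login page.
    callsign = request.form["callsign"].strip().upper()
    checkins.append(callsign)
    return jsonify(ok=True, callsign=callsign)


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```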
Docker container to automate check-ins for keyups during the Net
The solution is based on the N4IRR PyTalk project and includes MMDVM, AnalogBridge, and a custom Python script. After any keyup it takes the Brandmeister metadata and tries to resolve operator details (name and location) via QRZ.com or RadioID, falling back to the Brandmeister metadata itself. This makes the Net easier and more personal for participants and the Net Control Operator. The information is added to the Google Sheet with an AIR tag in the Origin column.
The container can be hosted on the same EC2 instance as the frontend.
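A rough sketch of the keyup handler's fallback logic follows; the lookup and sheet helpers are stubs standing in for the real QRZ.com/RadioID queries and the Google Sheets write, and none of these names come from the project's actual code.

```python
# Sketch of the keyup handler's lookup fallback. qrz_lookup, radioid_lookup,
# and append_to_sheet are stubs standing in for the real QRZ.com / RadioID
# queries and the Google Sheets write.
def qrz_lookup(callsign):
    """Stub: a real implementation would query the QRZ.com XML API."""
    return None


def radioid_lookup(callsign):
    """Stub: a real implementation would query RadioID."""
    return None


def append_to_sheet(row):
    """Stub: a real implementation would append the row to the Google Sheet."""
    print("would append:", row)


def handle_keyup(bm_metadata):
    callsign = bm_metadata["callsign"]
    # Try the lookup sources in order, then fall back to Brandmeister metadata.
    details = qrz_lookup(callsign) or radioid_lookup(callsign) or {
        "name": bm_metadata.get("name", ""),
        "location": "",
    }
    # The AIR tag in the Origin column marks over-the-air check-ins.
    append_to_sheet([callsign, details["name"], details["location"], "AIR"])


handle_keyup({"callsign": "N0CALL", "name": "Test Operator"})
```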
Backend
The backend logic is pretty simple. Each callsign can be resolved against one of two sources: QRZ.com or RadioID. By default, QRZ.com has a small daily quota for callsign lookups, which is enough to handle an average-size Net (around 100 participants).
After a callsign is resolved, it is added to an in-memory table and sent to the Google Sheets document. Google Sheets is used as persistent storage and offers several useful features, such as dynamically adding items as the last row of the table. It also lets the Net Control Operator modify the sheet on the go (e.g., delete duplicates or mark people who have been called back).
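A minimal sketch of such a write using the gspread library, assuming a service-account credential shared with the sheet; the file name, spreadsheet title, worksheet name, and example row are placeholders.

```python
# Minimal sketch of the Google Sheets write, assuming the gspread library and
# a service-account key with access to the sheet. File name, spreadsheet
# title, worksheet name, and the row itself are placeholders.
import gspread

gc = gspread.service_account(filename="service_account.json")
worksheet = gc.open("Net Check-ins").worksheet("Checkins")

# append_row inserts after the last non-empty row, which gives the
# "dynamically add as the last entry" behaviour described above.
worksheet.append_row(["N0CALL", "Test Operator", "Denver, CO", "WEB"])
```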
The document has a separate sheet that includes only unique callsigns (for example, Google Sheets' UNIQUE() function can produce such a list). It's convenient for the final callsign count or for checking whether a specific callsign participated in the Net.
Video example from one of the Colorado HD Nets:
Outcomes
The overall solution cost was about $0.16 for two Nets a month, mainly for VM hours; S3 and Lambda stayed under the free-tier invocation limits, and Google Sheets API usage was also well below the free limits. The system was able to run autonomously and was convenient for the Net Control Operator to use.
Information sources