Security and Stability
Stability & Uptime
The codeREADr platform has an uptime of more than 99.9%. In other words, our downtime averages less than 43.8 minutes per month. We understand that your business depends on the uptime of our servers, so we go to great lengths to keep them available, redundant and fast.
User Authentication & Permissions
Whether you’re on the website or on the app, codeREADr requires you to go through an authentication process. The website account holder (known as the Administrator) must give every mobile app user a unique username and password that allow them to access the app. Once users are logged into the mobile application, the Administrator can set specific permissions for each user. In this way, every user has access to only what they need, and nothing more.
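As an illustrative sketch only (not codeREADr's actual implementation), the credential-plus-permission model described above can be pictured like this, where every name and service is hypothetical:

```python
import hashlib

# Hypothetical sketch: the Administrator creates app users with unique
# credentials and grants each one only the services it needs.
users = {}

def add_user(username, password, allowed_services):
    users[username] = {
        "pw_hash": hashlib.sha256(password.encode()).hexdigest(),
        "services": set(allowed_services),
    }

def can_access(username, password, service):
    user = users.get(username)
    if user is None:
        return False
    if hashlib.sha256(password.encode()).hexdigest() != user["pw_hash"]:
        return False
    # Access is granted only to the services this user was assigned.
    return service in user["services"]

add_user("door_scanner", "s3cret", ["event-entry"])
print(can_access("door_scanner", "s3cret", "event-entry"))  # True
print(can_access("door_scanner", "s3cret", "inventory"))    # False
```

A failed password check and a missing permission both deny access, so each user sees only what they need.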
Encrypted App Communication
When data travels from your mobile device to our servers, it is securely encrypted via TLS. This means that all of the data within your scans, such as the service, user, device and location, goes through this cryptographic protocol. This ensures that your data is safe, secure, and only accessible to an administrator with a valid username and password for the website.
Encrypted Website Communication
The login to CodeREADr.com is also encrypted by TLS. This means that all of the data transferred between our servers and your administrator's web browser while viewing or downloading scans is encrypted. Also, just in case someone forgets to type the “s”, we always redirect browsers from http:// to https:// so that any authentication, login or viewing of data between us is secure.
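The http:// to https:// redirect mentioned above amounts to a simple URL rewrite before any credentials are sent. A minimal sketch (not our actual server configuration):

```python
# Rewrite any plain-HTTP URL to its HTTPS equivalent, so the browser
# retries the request over an encrypted connection.
def redirect_to_https(url):
    if url.startswith("http://"):
        return "https://" + url[len("http://"):]
    return url  # already secure (or not an HTTP URL); leave unchanged

print(redirect_to_https("http://codereadr.com/login"))
# -> https://codereadr.com/login
```

In practice the web server issues an HTTP 301 response with the rewritten URL, and the browser follows it automatically.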
Encrypted API Communication
The APIs you can call to retrieve data or configure codeREADr in the cloud are also encrypted by TLS. The API utilizes token-based authentication and IP filtering to ensure it is only your server that is connecting to your information. You are able to revoke and reset your API keys at any time.
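Conceptually, the two checks work together: a request must present a valid API key and arrive from an approved IP address. The sketch below is purely illustrative, with made-up account names and addresses, and is not codeREADr's actual API code:

```python
import secrets

# Hypothetical server-side state: one API key and one IP allow-list
# per account.
api_keys = {"acct_42": secrets.token_hex(16)}
allowed_ips = {"acct_42": {"203.0.113.10"}}

def authorize(account, key, source_ip):
    # Both checks must pass: valid token AND approved source IP.
    return (api_keys.get(account) == key
            and source_ip in allowed_ips.get(account, set()))

def revoke_and_reset(account):
    # Keys can be revoked and reset at any time; the old key
    # immediately stops working.
    api_keys[account] = secrets.token_hex(16)
    return api_keys[account]

old_key = api_keys["acct_42"]
print(authorize("acct_42", old_key, "203.0.113.10"))  # True
new_key = revoke_and_reset("acct_42")
print(authorize("acct_42", old_key, "203.0.113.10"))  # False: key revoked
```

Note that a correct key from an unlisted IP is rejected just like a bad key, which is what keeps a leaked token from being usable elsewhere.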
Replicated & Redundant Databases
With codeREADr, you should never worry about losing data. Not only is our database highly scalable, it is also synchronously replicated across multiple data centers. This means that if one database server goes down, we have others waiting in standby mode to immediately take its place. These standby databases support automatic failover, meaning the system switches from the failed database to a replica without human intervention. There is no waiting for our IT guy to repair the connections.
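The failover behavior described above follows a familiar pattern: try the primary first, then fall through to a standby when the primary is unreachable. A simplified sketch with made-up hostnames, not our production failover code:

```python
# Try each server in order (primary first, then standbys) until one
# answers; no human intervention is needed for the switch.
def query_with_failover(servers, run_query):
    last_error = None
    for host in servers:
        try:
            return run_query(host)
        except ConnectionError as err:
            last_error = err  # this server is down; try the next one
    raise last_error  # every server failed

# Simulated backend: the primary is down, the standby is healthy.
def fake_query(host):
    if host == "db-primary":
        raise ConnectionError("primary unreachable")
    return f"result from {host}"

print(query_with_failover(["db-primary", "db-standby-1"], fake_query))
# -> result from db-standby-1
```

Real failover systems also handle promotion, health checks and split-brain protection, but the client-visible effect is the same: the query still succeeds.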
Continuous Data Backup
We also continually back up snapshots of your data so that, should a disaster occur, we will be able to restore information from any point in time, any second, from the past three days. We also store weekly and monthly backups of our databases. We back up everything except barcode images, which can simply be regenerated.
Database Technical Details:
Our database is synchronously replicated across multiple data centers and supports automatic failover. Therefore, if there are any problems with our primary database server, we will automatically switch over to a replicated database within minutes and without human intervention. We are also able to apply patches and updates to the database software without any downtime.
We conduct maintenance via the following steps:
- Perform maintenance on standby
- Promote standby to primary
- Perform maintenance on the old primary, which becomes the new standby.
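The three steps above can be sketched as a rolling role swap. This is an illustrative model only, with hypothetical names, not our actual maintenance tooling:

```python
# Zero-downtime maintenance: patch the standby, promote it, then patch
# the old primary in its new standby role.
def rolling_maintenance(cluster, patch):
    patch(cluster["standby"])                   # step 1: maintain standby
    cluster["primary"], cluster["standby"] = (  # step 2: promote standby
        cluster["standby"], cluster["primary"])
    patch(cluster["standby"])                   # step 3: maintain old primary
    return cluster

patched = []
cluster = rolling_maintenance({"primary": "db-a", "standby": "db-b"},
                              patch=patched.append)
print(cluster)  # {'primary': 'db-b', 'standby': 'db-a'}
print(patched)  # ['db-b', 'db-a']
```

At every moment one node is serving as primary, which is why patches and updates require no downtime.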
We continually back up our database, storing the backups for a defined retention period of 30 days. We support both Point-In-Time Restore and Snapshot Restore. Our Point-In-Time Restore allows us to specify any minute (except the previous 5 minutes) during the past 30 days and restore the data. We also perform automatic full daily snapshots of our database and retain these copies for a month.
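The valid Point-In-Time Restore window described above, any minute in the past 30 days except the most recent 5 minutes, can be expressed as a simple range check. A hypothetical sketch, not our restore tooling:

```python
import datetime

RETENTION = datetime.timedelta(days=30)   # backups kept for 30 days
REPLAY_LAG = datetime.timedelta(minutes=5)  # most recent 5 minutes excluded

def is_valid_restore_target(target, now):
    # The target must be no older than the retention period and no
    # newer than the replay lag.
    return now - RETENTION <= target <= now - REPLAY_LAG

now = datetime.datetime(2024, 6, 15, 12, 0)
print(is_valid_restore_target(now - datetime.timedelta(hours=2), now))    # True
print(is_valid_restore_target(now - datetime.timedelta(minutes=2), now))  # False: too recent
print(is_valid_restore_target(now - datetime.timedelta(days=45), now))    # False: past retention
```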
We also utilize an automated replication service for our file system. The service redundantly stores data in multiple facilities and on multiple devices within each facility. To increase durability, this system synchronously stores data across multiple facilities at the time of file creation. In addition, the service calculates a checksum on all network traffic to detect corruption of data packets when storing or retrieving data.
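Checksum-based corruption detection works by hashing the payload on one side and recomputing the hash on the other. A minimal sketch of the idea (our file replication service's internal checksum format is not public, so this is illustrative only):

```python
import hashlib

def send(payload: bytes):
    # The sender transmits the payload along with its digest.
    return payload, hashlib.sha256(payload).hexdigest()

def receive(payload: bytes, checksum: str) -> bool:
    # The receiver recomputes the digest and compares before storing;
    # any bit flip in transit produces a mismatch.
    return hashlib.sha256(payload).hexdigest() == checksum

data, digest = send(b"scan record 123")
print(receive(data, digest))            # True: payload intact
print(receive(data + b"\x00", digest))  # False: corruption detected
```

A mismatch means the data was damaged in transit, so the transfer can be retried rather than storing a corrupted copy.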