Fans of the Turris Omnia router may be excited that the Turris MOX modular network device is now available for general retail sale after a successful Indiegogo campaign.
The MOX system is built around the MOX Start module (EUR 169), which contains a dual-core 1 GHz Marvell Armada CPU and 512/1024 MB of DDR memory. Like its Omnia predecessor, the MOX runs Turris OS, which is based on OpenWrt and has a particular focus on security features. A private cloud service is also available through a partnership with Nextcloud.
There's another thread where Turris MOX has been discussed.
Here are the issues - and I'm a sound reference here, as I designed and deployed a board (aka Science Project) based on the Marvell Armada 3720 platform.
1) 3720 - it's a dual-core A53 at 1 GHz - a decent enough performer by itself, but nothing special compared to other A53 deployments (except for the RPi3 - while it's also A53, they didn't elect to license the crypto acceleration instructions there; a quick way to check for them is shown right after this list)
Compared to the Armada 38x in the Omnia, it's actually a step back, and not just from a CPU perspective
2) In the reference design, one has to go through the Topaz switch for all interfaces - this is the case for the EspressoBin, along with Netgate's SG-1100 (running pfSense) - Topaz has issues and can block. Science Project did not use Topaz; we used the two native MACs on the 3720 directly with 1 Gb PHYs, so no blocking.
3) The 3720, on both the EspressoBin and, as I found, in my own development, does have some PCIe issues - perhaps not impossible to solve, but that's why we elected not to support WiFi via PCIe expansion on Science Project.
4) The 3720 gets a bit unstable above 800 MHz - it's solid there, but pushed to 1.0-1.2 GHz it's on the edge. Good thermals are only part of the problem; the VDC bus for input and power distribution is also important, as the 3720 can pull inrush current beyond spec when booting.
5) This is a remarkable design with the MOXTEC bus - and probably why this product has been delayed as long as it has; first mentions here were a couple of years ago.
6) Marvell - due to corporate mergers/acquisitions, getting priority support for the Armadas is becoming a bit challenging, as they're now more focused on the big-iron server chips rather than little ones like the 3720, and WiFi support for Marvell parts has shifted to NXP.
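On the crypto extensions point in 1) - a quick way to check is to look at the CPU feature flags. The command below is the generic aarch64 check; the output line is what a crypto-capable A53 like the 3720 should report (aes/pmull/sha1/sha2 present), shown here as an illustration rather than a capture from my board - on an RPi3 those flags are simply missing.
Code:
# grep -m1 Features /proc/cpuinfo
Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32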
And to give some idea of performance on the 3720... this is my build for Science Project, not other 3720 boards.
Code:
# openssl speed -evp aes-128-gcm -elapsed
You have chosen to measure elapsed time instead of user CPU time.
Doing aes-128-gcm for 3s on 16 size blocks: 4207130 aes-128-gcm's in 3.00s
Doing aes-128-gcm for 3s on 64 size blocks: 1364035 aes-128-gcm's in 3.00s
Doing aes-128-gcm for 3s on 256 size blocks: 372226 aes-128-gcm's in 3.00s
Doing aes-128-gcm for 3s on 1024 size blocks: 96435 aes-128-gcm's in 3.00s
Doing aes-128-gcm for 3s on 8192 size blocks: 12032 aes-128-gcm's in 3.00s
Doing aes-128-gcm for 3s on 16384 size blocks: 6057 aes-128-gcm's in 3.00s
OpenSSL 1.1.1c 28 May 2019
built on: Thu Aug 1 18:39:26 2019 UTC
options:bn(64,64) rc4(char) des(int) aes(partial) blowfish(ptr)
compiler: aarch64-openwrt-linux-musl-gcc -fPIC -pthread -Wa,--noexecstack -Wall -O3 -Os -pipe -mcpu=cortex-a53 -fno-caller-saves -fno-plt -fhonour-copts -Wno-error=unused-but-set-variable -Wno-error=unused-result -Wformat -Werror=format-security -fstack-protector -D_FORTIFY_SOURCE=1 -Wl,-z,now -Wl,-z,relro -fpic -ffunction-sections -fdata-sections -znow -zrelro -DOPENSSL_USE_NODELETE -DOPENSSL_PIC -DOPENSSL_CPUID_OBJ -DOPENSSL_BN_ASM_MONT -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DKECCAK1600_ASM -DVPAES_ASM -DECP_NISTZ256_ASM -DPOLY1305_ASM -DNDEBUG -DOPENSSL_SMALL_FOOTPRINT
The 'numbers' are in 1000s of bytes per second processed.
type 16 bytes 64 bytes 256 bytes 1024 bytes 8192 bytes 16384 bytes
aes-128-gcm 22438.03k 29099.41k 31763.29k 32916.48k 32855.38k 33079.30k
FWIW - some folks like to quote big numbers for A53s, but in reality, not so much - see the quick math after the CBC numbers below...
Code:
# openssl speed -evp aes-128-cbc -elapsed
You have chosen to measure elapsed time instead of user CPU time.
Doing aes-128-cbc for 3s on 16 size blocks: 10535141 aes-128-cbc's in 3.00s
Doing aes-128-cbc for 3s on 64 size blocks: 8272537 aes-128-cbc's in 3.00s
Doing aes-128-cbc for 3s on 256 size blocks: 4301807 aes-128-cbc's in 3.00s
Doing aes-128-cbc for 3s on 1024 size blocks: 1514857 aes-128-cbc's in 3.00s
Doing aes-128-cbc for 3s on 8192 size blocks: 216709 aes-128-cbc's in 3.00s
Doing aes-128-cbc for 3s on 16384 size blocks: 109134 aes-128-cbc's in 3.00s
OpenSSL 1.1.1c 28 May 2019
built on: Thu Aug 1 18:39:26 2019 UTC
options:bn(64,64) rc4(char) des(int) aes(partial) blowfish(ptr)
compiler: aarch64-openwrt-linux-musl-gcc -fPIC -pthread -Wa,--noexecstack -Wall -O3 -Os -pipe -mcpu=cortex-a53 -fno-caller-saves -fno-plt -fhonour-copts -Wno-error=unused-but-set-variable -Wno-error=unused-result -Wformat -Werror=format-security -fstack-protector -D_FORTIFY_SOURCE=1 -Wl,-z,now -Wl,-z,relro -fpic -ffunction-sections -fdata-sections -znow -zrelro -DOPENSSL_USE_NODELETE -DOPENSSL_PIC -DOPENSSL_CPUID_OBJ -DOPENSSL_BN_ASM_MONT -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DKECCAK1600_ASM -DVPAES_ASM -DECP_NISTZ256_ASM -DPOLY1305_ASM -DNDEBUG -DOPENSSL_SMALL_FOOTPRINT
The 'numbers' are in 1000s of bytes per second processed.
type 16 bytes 64 bytes 256 bytes 1024 bytes 8192 bytes 16384 bytes
aes-128-cbc 56187.42k 176480.79k 367087.53k 517071.19k 591760.04k 596017.15k
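Converting those to router-speak: the GCM result tops out around 33,079 kB/s, i.e. roughly 33 MB/s x 8 ≈ 265 Mbit/s of raw single-core cipher throughput, while the CBC result of ~596,017 kB/s works out to roughly 4.8 Gbit/s. The CBC figure is the sort of big number that tends to get quoted, but GCM is what the VPN tests below actually exercise, so ~265 Mbit/s is the ceiling that matters here.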
Let's check potential OpenVPN perf
Code:
# openvpn --genkey --secret /tmp/secret
# time openvpn --test-crypto --secret /tmp/secret --verb 0 --tun-mtu 20000 --cipher aes-128-gcm
Sun Nov 10 17:08:41 2019 disabling NCP mode (--ncp-disable) because not in P2MP client or server mode
real 0m 17.57s
user 0m 17.50s
sys 0m 0.01s
With AES-128-GCM and OpenVPN, it's possible to see 182 Mbit/sec from this test - real world is about half of that for OpenVPN.
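For those wondering where the 182 comes from: as I recall, --test-crypto encrypts and then decrypts payloads of every size from 1 byte up to the tun-mtu, so with --tun-mtu 20000 that's roughly 20000 x 20001 / 2 ≈ 200 MB of plaintext pushed through the cipher in each direction, call it ~400 MB of crypto work total; 400 MB / 17.57 s ≈ 22.8 MB/s, or about 182 Mbit/s. Back-of-envelope math, not gospel.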
WG (WireGuard) - real world with a fast client - I'm seeing about 200 Mbit/sec.
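If anyone wants to reproduce the "real world" side rather than the synthetic tests, the usual tool is iperf3 run through the tunnel. A minimal sketch - assuming iperf3 is installed on both ends, and using 10.8.0.1 as a placeholder for the server's tunnel address (not an address from my setup):
Code:
# on the server side of the tunnel
iperf3 -s

# on the client, pushing traffic through the VPN to the server's tunnel IP
iperf3 -c 10.8.0.1 -t 30

# and the reverse direction, since forwarding/crypto load can be asymmetric
iperf3 -c 10.8.0.1 -t 30 -R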