pixelserv - A Better One-pixel Webserver for Adblock


np, I've been running one for some time, and completely forgot about the fact that those sites are heavy on it.
I don't mind ads, everybody needs to eat, but advertising has grown so rampant it's becoming very unsettling
 
This is a minor point, but when pixelserv writes to the syslog, its entries are four hours ahead of the syslog time. I'm at -5 hours (East Coast US). Is there a way to get the pixelserv entries on the same time?

Edit: 87U and .62A1.
Code:
Aug 28 10:26:30 rc_service: waitting "start_vpnserver1" via udhcpc ...
Aug 28 14:26:30 pixelserv[1025]: pixelserv-tls version: V35.HZ12.Kh compiled: Jul 20 2016 22:57:01 options: 192.168.0.3
Aug 28 14:26:30 pixelserv[1025]: Listening on :192.168.0.3:80
Aug 28 14:26:30 pixelserv[1025]: Listening on :192.168.0.3:443
Aug 28 10:26:31 elorimer: Started pixelserv-tls from .
Aug 28 10:26:31 rc_service: hotplug 822:notify_rc restart_nasapps

Pixelserv-tls is innocent of this issue :)

Wrong timestamps are caused by misuse of the timezone in AsusWRT. Asus got it inverted for some unknown reason. Perhaps it was too costly to correct the mistake afterward.

To work around it, try adding "export TZ=$(cat /etc/TZ)" at the front of your copy of the pixelserv-tls init script in /opt/etc/init.d (named SxxSomething).

See if this fixes the wrong timestamps you observed. You can apply the same workaround to other daemons with wrong timestamps.
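
For reference, a minimal sketch of the top of such an init script (the script name and daemon line are illustrative; only the export line comes from the workaround above):

```shell
#!/bin/sh
# Top of a hypothetical /opt/etc/init.d/S80pixelserv-tls on AsusWRT.
# /etc/TZ holds the POSIX TZ string; exporting it makes daemons started
# from this script log in the router's local time.
export TZ=$(cat /etc/TZ)

# ...rest of the original init script, e.g. launching the daemon:
# /opt/sbin/pixelserv-tls 192.168.0.3
```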
 
Thanks for making this; it's nice to get a lighter browsing experience.
It took me some hours, but I was able to get it running on a Tiny Core Linux VM.

By the way, I'm experiencing a problem with the certificates: while they are correctly generated, they are missing the certificate chain information, even though the CA issuer information is present and valid.
As I'm using a CA cert for internal use, I was hoping to be able to reuse it, but in fact even with a generated one this anomaly is present:
pixelserv generated cert

The issuer should appear at the first level of the hierarchy, and only then should the blocked site be printed one level below. Manually made certs show this correctly, but those from pixelserv don't.

As a comparison, a valid Google cert.
Note: the intermediate level isn't required.

This is problematic, as the certs are in fact not fully valid. Browsers hide this most of the time because the RFC requires silently closing the connection on an invalid cert when it's an external resource rather than the main page. On the other hand, accessing the URL directly will have both Firefox and IE complain about it, even after adding an exclusion.
It seems something is missing or isn't applied when pixelserv-tls creates the certs.

For reference, I'm currently running the Kh version; I didn't check with a previous one.

Hi @Popov. Interesting issue. Thanks for letting me know. Just to make sure I've got it right: you have an intermediate CA cert which is signed by a *real* Root CA. You then provide this intermediate CA cert to pixelserv-tls, and see the issue you described. Am I correct?

To be honest, this scenario was never tested. I'll try to reproduce it and let you know.
 
Pixelserv-tls is innocent of this issue :)

Wrong timestamps are caused by misuse of the timezone in AsusWRT. Asus got it inverted for some unknown reason. Perhaps it was too costly to correct the mistake afterward.

To work around it, try adding "export TZ=$(cat /etc/TZ)" at the front of your copy of the pixelserv-tls init script in /opt/etc/init.d (named SxxSomething).

See if this fixes the wrong timestamps you observed. You can apply the same workaround to other daemons with wrong timestamps.
Awesome! Works fine.
 
By the way, I'm experiencing a problem with the certificates: while they are correctly generated, they are missing the certificate chain information, even though the CA issuer information is present and valid.
As I'm using a CA cert for internal use, I was hoping to be able to reuse it, but in fact even with a generated one this anomaly is present:
pixelserv generated cert

Confirmed an issue in ver. Kh. Thanks for reporting. It's a very interesting problem!
The issuer should appear at the first level of the hierarchy, and only then should the blocked site be printed one level below. Manually made certs show this correctly, but those from pixelserv don't.

Right, the blocked site shall be one level below the issuer CA, but the intermediate CA won't be at the first level if it isn't the Root CA. So depending on the number of intermediates, we'll see a deeper hierarchy.

This is problematic, as the certs are in fact not fully valid. Browsers hide this most of the time because the RFC requires silently closing the connection on an invalid cert when it's an external resource rather than the main page. On the other hand, accessing the URL directly will have both Firefox and IE complain about it, even after adding an exclusion.
It seems something is missing or isn't applied when pixelserv-tls creates the certs.

From my investigation, the auto-generated certificates are still valid. Other than that, all your descriptions are correct!

The issue is that my TLS code doesn't build a certificate chain. It isn't there at all because I didn't anticipate such a usage scenario. In hindsight, it's a very good use case and perhaps useful in an SME environment. It's quite easy to fix once I understand the problem. Below is a screenshot of an enhanced version. I want to shuffle the code a bit for speed and will push the change after that. If you're interested in an early test, I can make arm, amd64 or mips builds available for you.
screenshot.png
 
Does anyone have thoughts on the max timeout setting? I noticed that something was taking the default setting of 10 seconds, so I have been toying with lowering it. But I realized that I wasn't sure what the pros and cons of changing the value might be.
 
Does anyone have thoughts on the max timeout setting? I noticed that something was taking the default setting of 10 seconds, so I have been toying with lowering it. But I realized that I wasn't sure what the pros and cons of changing the value might be.

There are still some connections that last until the maximum timeout. It would be good to figure out what they are. I'm not too bothered by it and use the default. The average processing time looks pretty good in my case after serving hundreds of thousands of requests.
Screen Shot 2016-09-02 at 4.02.59 PM.png

By reducing max timeout, you potentially speed up client responsiveness. This will be beneficial if a non-trivial number of client requests persist until timeout; the average processing time in such a case will be large. @mstombs has a preference for specifying max timeout from his old habit (?). He might bring in a different perspective.
 
A timeout is needed because sometimes browser processes never close the connection. We started with 2 seconds, then upped the default to 10 seconds, and a few more connections closed cleanly, but not all; it just makes sure you don't have lots of stuck processes hanging around. It could well be client device/browser dependent. You can change the timeout via the pixelserv start parameters; I currently have "-o 2" from previous testing with wireshark/tcpdump to try to understand why the comms are broken (never finished!). Wireshark used to highlight the forcibly closed communication in red. I also have a pet hate for Microsoft processes that seem to have 30 or 300 second timeouts; on a LAN you expect replies in milliseconds, but I never really got to the bottom of it.
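
For reference, a sketch of passing the flag at startup (the listen address is an example; -o is the timeout parameter mentioned above):

```shell
# Start pixelserv-tls with a 2-second timeout instead of the default 10
pixelserv-tls 192.168.0.3 -o 2
```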
 
Made it generic. A root CA cert or an intermediate CA cert (sitting at an arbitrary level of the trust hierarchy) supplied to pixelserv will now work either way.

An example of providing pixelserv with a CA cert two levels down from the root:
screenshot.png
This is fun :)
 
The issue is that my TLS code doesn't build a certificate chain. It isn't there at all because I didn't anticipate such a usage scenario. In hindsight, it's a very good use case and perhaps useful in an SME environment. It's quite easy to fix once I understand the problem. Below is a screenshot of an enhanced version. I want to shuffle the code a bit for speed and will push the change after that. If you're interested in an early test, I can make arm, amd64 or mips builds available for you.
View attachment 7199

No rush, but if needed I can test, and compile it directly: I had to build an x86 version from the Kh release.
VMware has some trouble running low-resource distros in x64 mode, and asks for more than double the memory just to boot the kernel (i.e. around 100+ MB when 44 MB is enough).

By the way, while you're at it, the x86 section in the makefile is incomplete: each $(CC32) line is missing the $(SHAREDLIB) variable.
Also, the pixelserv-tls-XXXX/openssl/ directory requires the i386/ subdir for the libraries.
With these fixed, I could compile pixelserv-tls without error on an Ubuntu VM with the gcc-multilib toolset (both x86/x64).

I also found a little behavior difference between x86 and x64; I don't know if it is intended:
on the x64 version, started as root with the -u pixelserv param, there's a single process handling both the http(s) responses and cert creation;
on the x86 version, same settings, there are 2 permanent processes: the parent under the pixelserv user, the child as the root user, like this:
Code:
PID  PPID  USER  COMMAND  COMMAND
2291  1 pixelser /mnt/sda1/server/pixelserv/pixelserv/pixelserv.x86.performance.static -u pixelserv -z /mnt/sda1/server/pixelserv/conf
2292  2291 root  /mnt/sda1/server/pixelserv/pixelserv/pixelserv.x86.performance.static -u pixelserv -z /mnt/sda1/server/pixelserv/conf
The root process handles the cert creation part; the pixelserv-user process does the http(s) responses.
It's also noticeable in the cert files created, as they are owned by root.

That aside, it's working as usual.
 
@Popov I pushed a set of working changes. Please update your local copy with "git pull origin master" to retrieve the delta and build. From my test so far, the average processing time increased 5 times (from ~50ms to ~250ms), which I hope to reduce a bit in the final change.

By the way, while you're at it, the x86 section in the makefile is incomplete: each $(CC32) line is missing the $(SHAREDLIB) variable.
Also, the pixelserv-tls-XXXX/openssl/ directory requires the i386/ subdir for the libraries.
With these fixed, I could compile pixelserv-tls without error on an Ubuntu VM with the gcc-multilib toolset (both x86/x64).

I tried an x86 build a long time ago on my amd64 Linux and never made it work. x86 (like the other supported platforms) was inherited from the fork's parent. It would be great if you could submit the working changes. (In case other readers wonder: I didn't try the Tomatoware section and am pretty sure it won't build.)

I also found a little behavior difference between x86 and x64; I don't know if it is intended:
on the x64 version, started as root with the -u pixelserv param, there's a single process handling both the http(s) responses and cert creation;
on the x86 version, same settings, there are 2 permanent processes: the parent under the pixelserv user, the child as the root user, like this:

pixelserv-tls can be built as fat processes for multiprocessing, or with pthread as a multi-threaded single process. It depends on the build command-line options. It seems to me that the x64 version was built with pthread while x86 was built as fat processes. It's possible to build x86 with pthread.
 
Pushed further changes to the main trunk. Now users of a root CA cert shall be as fast as before, and users of an intermediate CA cert shall be faster than with the last commit.
 
Thanks.
It compiled as an x86 binary without error; I'll post under the how-to.
It effectively builds the chain information, but it is incomplete.

pixelserv-tls only uses the CN= from the intermediate CA cert to build the chain, and gives this as a result:
Code:
openssl s_client -connect doubleclick.net:443 -showcerts -servername doubleclick.net -CAfile ./ca.crt
CONNECTED(00000003)
depth=0 CN = doubleclick.net
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 CN = doubleclick.net
verify error:num=21:unable to verify the first certificate
verify return:1
---
Certificate chain
0 s:/CN=doubleclick.net
  i:/CN=Elysium
1 s:/CN=Elysium
  i:/CN= (...)

I was surprised at first by the errors, and all browsers reported an empty chain, so I thought it was a cache problem (surprising with SSL certs).
But openssl gave the full chain, so I spent some time investigating.

And then I tried the same command on one of my working servers:
Code:
Certificate chain
0 s:/CN=...
  i:/C=../ST=.../L=.../O=.../CN=Elysium
1 s:/C=../ST=.../L=.../O=.../CN=Elysium
  i: ...
Well, openssl is a bi*, no surprise here. It wants the full cert subject in the name, not only the CN:
/C= Country (example : GB)
/ST= State (London)
/L= Location (London)
/O= Organization (Headquarters)
/OU= Organizational Unit (Department )
/CN= Common Name (example.com)
Any of them, though not all, may be missing.
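
As an illustration, a CA cert carrying a full subject can be generated like this (all field values are examples):

```shell
# Self-signed CA with every subject field populated
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
    -keyout ca.key -out ca.crt \
    -subj "/C=GB/ST=London/L=London/O=Headquarters/OU=Department/CN=Elysium"

# Show the subject exactly as it will appear in the chain
openssl x509 -in ca.crt -noout -subject
```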

Sorry for the extra work
 
For the x86 build :
The package commands are for Ubuntu 16.04 x64; for other distributions, this could help:
http://www.cyberciti.biz/tips/compile-32bit-application-using-gcc-64-bit-linux.html

- first, the gcc x86 prerequisites:
Code:
sudo apt-get install lib32ncurses5 lib32z1 g++-multilib
sudo apt-get install libc6:i386 libc6-dev:i386 libgcc1:i386 linux-libc-dev:i386
Note: it is expected that the "build-essential" packages are already installed.

- next, openssl and zip for pixelserv:
Code:
sudo apt-get install openssl libssl-dev libssl-dev:i386 zip upx
Non-listed packages should be pulled in automatically as dependencies.

- create the missing directory in the pixelserv-tls source dir
Code:
mkdir openssl/i386

- link the x86 SSL libraries into openssl/
Code:
ln -s /usr/lib/i386-linux-gnu/libcrypto.a openssl/i386/
ln -s /usr/lib/i386-linux-gnu/libssl.a openssl/i386/
ln -s /usr/include/openssl openssl/include/

- apply this patch to the Makefile
Note: only needed for versions before Ki, where the x86 section was incomplete; it shouldn't be needed now.
Code:
--- Makefile  2016-09-05 21:11:04.843274410 +0200
+++ Makefile.x86  2016-09-05 21:22:38.501851091 +0200
@@ -27,7 +27,7 @@
UPX  := upx -9

# packaging macros
-PFILES  = LICENSE README.md dist/$(DISTNAME).$(ARCH).performance.*
+PFILES  = LICENSE README.md dist/$(DISTNAME).$@.performance.*
PVERSION  := $(shell grep VERSION util.h | awk '{print $$NF}' | sed 's|\"||g')
PCMD  := zip

@@ -88,14 +88,17 @@
printver:
  @echo "=== Building $(DISTNAME) version $(PVERSION) ==="

+x86: ARCH = i386
+x86: LDFLAGS += -lpthread
+x86: CFLAGS += -DUSE_PTHREAD
x86: printver dist
  @echo "=== Building x86 ==="
-  $(CC32) $(CFLAGS_D) $(LDFLAGS_D) $(OPTS) $(SRCS) -o dist/$(DISTNAME).$@.debug.dynamic
-  $(CC32) $(CFLAGS_P) $(LDFLAGS_P) $(OPTS) $(SRCS) -o dist/$(DISTNAME).$@.performance.dynamic
-  $(CC32) $(CFLAGS_D) -static $(LDFLAGS_D) $(OPTS) $(SRCS) -o dist/$(DISTNAME).$@.debug.static
-  $(CC32) $(CFLAGS_P) -static $(LDFLAGS_P) $(OPTS) $(SRCS) -o dist/$(DISTNAME).$@.performance.static
+  $(CC32) $(CFLAGS_D) $(LDFLAGS_D) $(OPTS) $(SRCS) -o dist/$(DISTNAME).$@.debug.dynamic $(SHAREDLIB)
+  $(CC32) $(CFLAGS_P) $(LDFLAGS_P) $(OPTS) $(SRCS) -o dist/$(DISTNAME).$@.performance.dynamic $(SHAREDLIB)
+  $(CC32) $(CFLAGS_D) -static $(LDFLAGS_D) $(OPTS) $(SRCS) -o dist/$(DISTNAME).$@.debug.static $(STATICLIB) $(SHAREDLIB)
+  $(CC32) $(CFLAGS_P) -static $(LDFLAGS_P) $(OPTS) $(SRCS) -o dist/$(DISTNAME).$@.performance.static $(STATICLIB) $(SHAREDLIB)
  $(STRIP) dist/$(DISTNAME).$@.performance.*
-  $(UPX) dist/$(DISTNAME).$@.performance.*
+#  $(UPX) dist/$(DISTNAME).$@.performance.*
  rm -f dist/$(DISTNAME).$(PVERSION).$@.zip
  $(PCMD) dist/$(DISTNAME).$(PVERSION).$@.zip $(PFILES)
The final Makefile should be the one with the x86 changes in it. If there are two Makefile files, rename/link the x86 one as Makefile.

- and build
Code:
make distclean
make x86

Binaries are in the "dist" subdir.

---------------------------------------------------------
Aside from a little fix for the zip and the UPX deactivation, the patch only changes the x86 section:
$(STATICLIB) is added to the 2 static build commands,
$(SHAREDLIB) is added to all 4 build commands,
and the 3 following variables are added in the x86 header: ARCH, LDFLAGS and CFLAGS.

At first, USE_PTHREAD wasn't present, but after reading your answer, kvic, I took a closer look at the makefile structure and understood why there was a difference in behavior with the x86 build I did.


For the record, I'm not a developer, but I tinker ... a lot :D
 
@Popov Very nice post! I've pushed your changes for x86 to git.

Well, openssl is a bi*, no surprise here. It wants the full cert subject in the name, not only the CN:
/C= Country (example : GB)
/ST= State (London)
/L= Location (London)
/O= Organization (Headquarters)
/OU= Organizational Unit (Department )
/CN= Common Name (example.com)
Any of them, though not all, may be missing.

Sorry for the extra work

Openssl is being good. I can reproduce the issue with Openssl and Safari. The problem lies in my code, where assumptions break down on properly prepared certificates for production use. I have some idea how to fix it.

You have pixelserv-tls challenged. That's a very good thing :)

EDIT 2:

Fixed the issuer problem and pushed to git. Please get the latest code. Also remove the old certificates that were auto-generated by pixelserv (i.e. everything except ca.crt and ca.key); the fix requires re-generation of the auto certs.

From my test it's now good, but I would count on your verification.
 
Tried the new build, and... whoops:
Code:
The certificate will not be valid until tuesday 6 september 2016 18:42.
The current time is tuesday 6 september 2016 18:41.
Error code: MOZILLA_PKIX_ERROR_NOT_YET_VALID_CERTIFICATE

Kidding, my workstation just has a slight delay, always being 2 mins behind despite the NTP client.

The last version is working perfectly: all certs are regenerated with the full CA information and are finally considered valid by all browsers (IE/Chrome/Firefox).
Thanks ;)

On a different note, maybe I'm nitpicking, but the git repo is missing the i386 subdir under openssl

Also, I think you can remove this line in the makefile:
line 67: # - static is not built for x86* targets because it causes glibc-related complaints
Especially as I've been running the static version since the first build :D
Well, there are some complaints when compiling, but they're only syntax-check warnings, nothing preventing the build, and they happen on both the x64 and x86 versions.
 
Tried the new build, and... whoops:
Code:
The certificate will not be valid until tuesday 6 september 2016 18:42.
The current time is tuesday 6 september 2016 18:41.
Error code: MOZILLA_PKIX_ERROR_NOT_YET_VALID_CERTIFICATE

Kidding, my workstation just has a slight delay, always being 2 mins behind despite the NTP client.

I've seen this first hand on one of my VPS. Two minutes ahead... I blamed the admin, but it seems it might not be his fault. Do you happen to know the cause?

The last version is working perfectly: all certs are regenerated with the full CA information and are finally considered valid by all browsers (IE/Chrome/Firefox).
Thanks ;)

Good to hear, thanks!

On a different note, maybe I'm nitpicking, but the git repo is missing the i386 subdir under openssl

Added.

Also, I think you can remove this line in the makefile:
line 67: # - static is not built for x86* targets because it causes glibc-related complaints
Especially as I've been running the static version since the first build :D
Well, there are some complaints when compiling, but they're only syntax-check warnings, nothing preventing the build, and they happen on both the x64 and x86 versions.

Removed the obsolete comment. That whole comment block was inherited from the fork's parent.

x86 finally a first class citizen in pixelserv. :)
 
For the x86 build :

Added a link in the first post to this How-To.

From my test so far, the average processing time increased 5 times (from ~50ms to ~250ms), which I hope to reduce a bit in the final change.

Pushed an optimization to git. Now an intermediate CA cert is as fast as a root CA cert! Both shall be <50ms processing each HTTPS request, on average. Note that your time may vary a bit if you're using a slow USB thumb drive. :)

Can't believe I'd do a non-trivial overhaul of this project after one year. We're very close to version Ki.
 
I've seen this first hand on one of my VPS. Two minutes ahead... I blamed the admin, but it seems it might not be his fault. Do you happen to know the cause?
If it's a Windows system, yeah: since Vista, Microsoft has borked the NTP client. It doesn't play nice with a non-Windows computer as the NTP source, as it still expects it to work like a Windows server would.

To fix it (I finally nailed it yesterday), you need to run this command in an admin-elevated cmd:
Code:
w32tm /config /manualpeerlist:TARGET.NTP.SERVER,0x8 /syncfromflags:MANUAL /reliable:yes /update
Replace TARGET.NTP.SERVER with the DNS name or IP of your NTP source.
The 0x8 flag forces the NTP client into real NTP mode, rather than the default Windows mode.
syncfromflags:MANUAL instructs it to sync from the peer list instead of looking for a Windows server on the domain.

You can use this command to check the current status: w32tm /query /status

Basically, you have to skip the graphical UI and configure the NTP client manually, because half of the options aren't available in the UI, and the missing settings won't work at their default values.

If it's Linux: is the NTP client actually running? :D
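
A couple of ways to check on a Linux box (command availability depends on the distribution; timedatectl assumes systemd):

```shell
# Is the system clock being disciplined by NTP?
timedatectl status        # look for "synchronized: yes" on systemd systems

# If a classic ntpd is in use, list its peers and offsets instead
ntpq -p
```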
 
Code:
iptables -t nat -A PREROUTING --dest 10.8.10.8 -p tcp --dport 80 -j DNAT --to-dest 192.168.1.1:8080
iptables -t nat -A PREROUTING --dest 10.8.10.8 -p tcp --dport 443 -j DNAT --to-dest 192.168.1.1:8088

When using Adblock 2.0, what IP should I put instead of 10.8.10.8?
 
