DragonPrime - LoGD Resource Community
Author Topic: Compressing the SQL traffic  (Read 2940 times)
Nightborn
Guest
« on: May 26, 2007, 12:54:12 AM »

Okay, this idea came from the glorious Edorian:
use PHP's gzcompress/gzuncompress to squeeze $session['user']['output'] down to a much smaller size and greatly reduce the database traffic.

If you implement this *in* a running game, please be sure to delete ALL stored outputs first, for safety reasons: if a user hits a badnav right after your upgrade, he will only see garbage on the screen.
Also note that I had a few issues at the start where people saw only the compressed output, but I have had no complaints for a few months now. My database traffic has gone down considerably, and my page load time has improved as well.
It does cost more CPU on the PHP side, but the database server handles much less traffic and therefore has less data to process. On pretty much every system that should level out the CPU cost of the compression/uncompression.
Normally you only ever compress; the stored output is only read back from the table on a badnav, or on a login that lands on a badnav.

NOTE: these modifications are for 1.1.1, where the output lives in a separate table. You will have to adapt them if you run a version below 1.1.1, though it is not a big deal to add a few gzcompress/gzuncompress calls.

Here is the stuff:
Code:
$row['output'] = gzuncompress($row['output']);
// Check whether the output needs to be unzipped again: on rare
// occasions I had double-compressed rows. If the result still does
// not look like HTML, uncompress it once more.
// Also make sure '' never reaches gzuncompress -> error.
if ($row['output'] != '' && strpos($row['output'], "HTML") === false)
        $row['output'] = gzuncompress($row['output']);

create.php:217:                                         
$sql_output = "INSERT INTO " . db_prefix("accounts_output") . " VALUES ({$row['acctid']},'".gzcompress('',1)."');";

lib/saveuser.php:33:
$sql_output="UPDATE " . db_prefix("accounts_output") . " SET output='".addslashes(gzcompress($session['output'],2))."' WHERE acctid={$session['user']['acctid']};";

lib/saveuser.php:36:
$sql_output="REPLACE INTO " . db_prefix("accounts_output") . " VALUES ({$session['user']['acctid']},'".addslashes(gzcompress($session['output'],2))."');";
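For anyone who wants to see the whole round trip in one place, here is a minimal, self-contained sketch of the idea. The function names are mine, not LoGD's (the real code lives in lib/saveuser.php and the badnav handling), and it assumes the zlib extension is available:

```php
<?php
// Sketch of the compress-on-save / uncompress-on-load round trip.

// On save: compress the rendered page before writing it to accounts_output.
function compress_output(string $output): string {
    return gzcompress($output, 2); // level 2: cheap on CPU, decent ratio
}

// On load (badnav / re-login): uncompress, guarding against the rare
// double-compressed rows and against '' (gzuncompress('') errors out).
function uncompress_output(string $stored): string {
    if ($stored === '') return '';
    $out = gzuncompress($stored);
    // Heuristic from the original code: if the result still does not
    // look like HTML, assume it was compressed twice and try once more.
    if ($out !== '' && strpos($out, 'HTML') === false) {
        $decoded = @gzuncompress($out);
        if ($decoded !== false) $out = $decoded;
    }
    return $out;
}
```

The guard makes the loader tolerant of both single- and double-compressed rows, which is exactly the failure mode described above.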

Adept
« Reply #1 on: May 26, 2007, 07:37:45 AM »

Do what I did in my 97 version: compress the output only for accounts with lasthit > x days. 99% of the benefit, very little of the cost.
Nightborn
Guest
« Reply #2 on: May 26, 2007, 10:38:40 AM »

Huh? You mean you only compressed the output of players who were not online for x days?

But I compress all of it; an active player does not produce that much traffic...

I am not concerned about database size (it fits into my RAM four times over with ease) but about the traffic the database has to handle.
Adept
« Reply #3 on: May 26, 2007, 03:50:43 PM »

Actually, I was considering compression level 1 for online/active players and level 9 for offline/inactive ones; haven't got around to it yet.

I figure that would get most of the benefit with reduced CPU usage.
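That two-tier idea could look something like this. Everything here is illustrative (choose_level() and the 7-day threshold are made up, not anything in the LoGD codebase):

```php
<?php
// Hypothetical sketch: cheap compression for active players, maximum
// compression for accounts idle longer than $days.

function choose_level(int $lasthit, int $now, int $days = 7): int {
    $idle = ($now - $lasthit) > $days * 86400; // 86400 seconds per day
    return $idle ? 9 : 1; // level 9: best ratio; level 1: least CPU
}

function compress_for(string $output, int $lasthit, int $now): string {
    return gzcompress($output, choose_level($lasthit, $now));
}
```

Since gzuncompress does not care which level was used to compress, the loading side needs no changes at all.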
artti
« Reply #4 on: May 17, 2019, 01:26:17 PM »

I also got curious about compressing the HTML output in the database.

So I implemented two functions I found on Stack Exchange:

Code:
function output_compression($output) {
    return rtrim(strtr(base64_encode(gzdeflate($output, 9)), '+/', '-_'), '=');
}

function output_decompression($output) {
    return gzinflate(base64_decode(strtr($output, '-_', '+/')));
}

and then just made modifications in saveuser, user_lasthit and forcednavigation, wherever the output is selected or inserted. A 40KB output is now 6KB. I don't even know whether I should be concerned about it, but I am curious about other people's experience.
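One nice side effect of the base64 step is that the stored value is plain text, so no addslashes is needed in the SQL: the '+/' to '-_' swap makes it URL-safe, and PHP's base64_decode tolerates the stripped '=' padding. A quick round-trip check of the two helpers, with a made-up repetitive page as input:

```php
<?php
// The two Stack Exchange helpers quoted above, plus a round-trip check.
function output_compression($output) {
    return rtrim(strtr(base64_encode(gzdeflate($output, 9)), '+/', '-_'), '=');
}

function output_decompression($output) {
    return gzinflate(base64_decode(strtr($output, '-_', '+/')));
}

// Repetitive HTML (as game pages tend to be) compresses very well,
// even after the roughly 33% overhead that base64 adds back.
$page   = str_repeat("<a href=\"village.php\">Return to the Village</a>\n", 500);
$packed = output_compression($page);

assert(output_decompression($packed) === $page);  // lossless round trip
assert(strpbrk($packed, '+/=') === false);        // safe for SQL and URLs
```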
pharis
« Reply #5 on: May 19, 2019, 07:40:57 AM »

Why do this at all? I mean, nowadays servers can handle a lot more, broadband internet is more widespread, and LoTGD is not that demanding any more in comparison with other apps. Why complicate the whole thing further?
artti
« Reply #6 on: May 20, 2019, 05:57:10 AM »

You know, you are probably right. I am always over-doing things that are not necessary. Tjah.
pharis
« Reply #7 on: May 20, 2019, 06:08:49 AM »

No problem :-) It's not about being right or wrong (I might be wrong as well), just about saving you some time.
Of course, as a learning experience it's more than OK. After all, that's how we learn what works best and what doesn't.
« Last Edit: May 20, 2019, 06:22:21 AM by pharis »
Nightborn
« Reply #8 on: June 23, 2019, 11:18:43 AM »

"Servers can handle a lot" ... you need to know that scaling from 1-30 users hitting a site every second is not really an issue with this sentence.
If you go to 100, or 200... or you have users who hyper through sites...

I give you 40kb.
Say you have 100 ppl hitting it per second. 40*100 = 4000, that's 4MB/sec to save in your DB.
Just the page hits. Nothing else. No useful data. That is just *if the user has a badnav* or *logs in again after timeout/logout*

So, let's say you have that all day. 24*60*60.
I'll let you do the math. =)

It's fine if you have low traffic, but this stuff there saved me tons of needless data transfers. =)
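Spelling out that back-of-the-envelope math (rough numbers, counting 1MB = 1024KB):

```php
<?php
// 40KB per page, 100 page hits per second, sustained all day.
$page_kb  = 40;
$hits_sec = 100;
$kb_sec   = $page_kb * $hits_sec;             // 4000 KB/s, about 4 MB/s
$gb_day   = $kb_sec * 86400 / (1024 * 1024);  // 86400 = 24*60*60 seconds
echo round($gb_day) . " GB of page output written per day\n"; // ~330 GB
```

At the roughly 6KB compressed size mentioned earlier in the thread, the same load would write only around 50GB per day.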
pharis
« Reply #9 on: June 24, 2019, 03:20:42 AM »

I still stand by that sentence. Your point is of course correct; I ran into those issues myself at least once, and the solution was mostly to cache stuff, use other DB engines, etc.

But we are talking about LoTGD here.

- Servers can indeed handle much more, we all know that. It's LoTGD that cannot scale up that easily, due to the way it works.
- Most people have their code and the MySQL server on the same machine, so we are adding CPU overhead for no big gain.
- Rarely is there a LoTGD site these days with more than 20 users online at the same time, so traffic will stay low.
- Besides, I don't think 4MB/s is that much. If you run a LoTGD site with that many users, you should know what you are doing and have a better infrastructure in place than a normal hoster.
- If performance is a concern, why not go with MariaDB's page compression, session caching with Redis, etc.? It would all speed things up immediately, but I am not sure LoTGD would benefit from it, given the state of the codebase.

That was my point.
There are other ways to get an increase in performance rather than introducing more complexity into the code.

I might be wrong on some points, but it's what I would do if I had to optimize the LoTGD experience for more users.
Given the general user numbers on an average LoTGD server, though, all this would probably be over-engineering and micro-optimization.

I don't have much experience in LoTGD hosting, so maybe someone can correct me. I would even appreciate it.

:-D
Nightborn
« Reply #10 on: July 13, 2019, 10:14:18 AM »

I see your point(s) ^^ they are valid.

With current installations, I think the bottlenecks are generally elsewhere.
Some have slow MySQL hosting but great web server speed; others quite the opposite.

Normally any webhosting platform can run LoTGD without breaking a sweat, with page generation times under one second.

The complexity you introduce needs to be balanced against the potential gain.
DragonPrime Notices
Version 1.1.2 is the current supported version and is available for download.
