
PHP: speed of a dynamic value such as time() over billion-item data

What is the preferred (least memory-consuming and fastest) approach to using time() or any similar dynamic value across a billion-plus iterations?

A)

$time = time();
foreach($_billion_items_array as $_i => $_v) {
  if($_v['time_saved'] < $time + rand(1,100)) {
    // do something
  }
}

B)

$this->time = time();

foreach($this->_billion_items_array as $_i => $_v) {
  if($_v['time_saved'] < $this->time + rand(1,100)) {
    $this->do_something($_v);
  }
}

C)

$this->time = time();

// fixTime() must be a method on the same class for $this to work:
function fixTime($_correction) {
  return $this->time + $_correction;
}

foreach($this->_billion_items_array as $_i => $_v) {
  if($_v['time_saved'] < $this->fixTime(rand(1,100))) {
    $this->do_something($_v);
  }
}

I would personally prefer C), but I don't know how PHP uses memory here: does every iteration store the time as a new variable? Is it the same in A) and B)?


Answer

A is surely the fastest, because it uses the simplest way to access your $time variable inside your loop.

C is by far the slowest, because it must invoke your function on every iteration of the loop.

All your choices use roughly the same amount of RAM.

If you were doing 10^3 iterations, none of this would make much difference. But you are doing 10^9 iterations, so you should simplify the code in your loop as much as you possibly can.

And I think you want foreach() in place of for().
