

PHP Okapi::make_groups Method Code Examples

This article collects typical usage examples of the PHP okapi\Okapi::make_groups method. If you are unsure what Okapi::make_groups does, how to call it, or what real-world uses of it look like, the curated code examples below may help. You can also explore further usage examples of the containing class, okapi\Okapi.


Two code examples of the Okapi::make_groups method are presented below, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better PHP code examples.
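In both examples, Okapi::make_groups is used to split a flat array (geocache codes, log entry uuids) into fixed-size batches so they can be processed in chunks without exhausting memory or exceeding per-request limits. The following is only a minimal sketch of that assumed behaviour, built on PHP's built-in array_chunk(); it is not the actual OKAPI implementation of make_groups.

    <?php
    /**
     * Sketch only: split a flat array into groups of at most $size items.
     * Assumed to behave like okapi\Okapi::make_groups(), which is what the
     * examples below rely on; the real implementation may differ.
     */
    function make_groups_sketch(array $items, $size)
    {
        return array_chunk($items, $size);
    }

    // Hypothetical usage: process 1150 codes in batches of at most 500.
    $codes = range(1, 1150);
    foreach (make_groups_sketch($codes, 500) as $group) {
        // Each $group holds up to 500 items; the last holds the remainder.
        echo count($group), "\n";   // prints 500, 500, 150
    }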

Example 1: generate_fulldump

 /**
  * Generate a new fulldump file and put it into the OKAPI cache table.
  * Return the cache key.
  */
 public static function generate_fulldump()
 {
     # First we will create temporary files, then compress them in the end.
     $revision = self::get_revision();
     $generated_at = date('c', time());
     $dir = Okapi::get_var_dir() . "/okapi-db-dump";
     $i = 1;
     $json_files = array();
     # Cleanup (from a previous, possibly unsuccessful, execution)
     shell_exec("rm -f {$dir}/*");
     shell_exec("rmdir {$dir}");
     shell_exec("mkdir {$dir}");
     shell_exec("chmod 777 {$dir}");
     # Geocaches
     $cache_codes = Db::select_column("select wp_oc from caches");
     $cache_code_groups = Okapi::make_groups($cache_codes, self::$chunk_size);
     unset($cache_codes);
     foreach ($cache_code_groups as $cache_codes) {
         $basename = "part" . str_pad($i, 5, "0", STR_PAD_LEFT);
         $json_files[] = $basename . ".json";
         $entries = self::generate_changelog_entries('services/caches/geocaches', 'geocache', 'cache_codes', 'code', $cache_codes, self::$logged_cache_fields, true, false);
         $filtered = array();
         foreach ($entries as $entry) {
             if ($entry['change_type'] == 'replace') {
                 $filtered[] = $entry;
             }
         }
         unset($entries);
         file_put_contents("{$dir}/{$basename}.json", json_encode($filtered));
         unset($filtered);
         $i++;
     }
     unset($cache_code_groups);
     # Log entries. We cannot load all the uuids at one time, this would take
     # too much memory. Hence the offset/limit loop.
     $offset = 0;
     while (true) {
         $log_uuids = Db::select_column("\n                select uuid\n                from cache_logs\n                where " . (Settings::get('OC_BRANCH') == 'oc.pl' ? "deleted = 0" : "true") . "\n                order by uuid\n                limit {$offset}, 10000\n            ");
         if (count($log_uuids) == 0) {
             break;
         }
         $offset += 10000;
         $log_uuid_groups = Okapi::make_groups($log_uuids, 500);
         unset($log_uuids);
         foreach ($log_uuid_groups as $log_uuids) {
             $basename = "part" . str_pad($i, 5, "0", STR_PAD_LEFT);
             $json_files[] = $basename . ".json";
             $entries = self::generate_changelog_entries('services/logs/entries', 'log', 'log_uuids', 'uuid', $log_uuids, self::$logged_log_entry_fields, true, false);
             $filtered = array();
             foreach ($entries as $entry) {
                 if ($entry['change_type'] == 'replace') {
                     $filtered[] = $entry;
                 }
             }
             unset($entries);
             file_put_contents("{$dir}/{$basename}.json", json_encode($filtered));
             unset($filtered);
             $i++;
         }
     }
     # Package data.
     $metadata = array('revision' => $revision, 'data_files' => $json_files, 'meta' => array('site_name' => Okapi::get_normalized_site_name(), 'okapi_version_number' => Okapi::$version_number, 'okapi_revision' => Okapi::$version_number, 'okapi_git_revision' => Okapi::$git_revision, 'generated_at' => $generated_at));
     file_put_contents("{$dir}/index.json", json_encode($metadata));
     # Compute uncompressed size.
     $size = filesize("{$dir}/index.json");
     foreach ($json_files as $filename) {
         $size += filesize("{$dir}/{$filename}");
     }
     # Create JSON archive. We use tar options: -j for bzip2, -z for gzip
     # (bzip2 is MUCH slower).
     $use_bzip2 = true;
     $dumpfilename = "okapi-dump.tar." . ($use_bzip2 ? "bz2" : "gz");
     shell_exec("tar --directory {$dir} -c" . ($use_bzip2 ? "j" : "z") . "f {$dir}/{$dumpfilename} index.json " . implode(" ", $json_files) . " 2>&1");
     # Delete temporary files.
     shell_exec("rm -f {$dir}/*.json");
     # Move the archive one directory upwards, replacing the previous one.
     # Remove the temporary directory.
     shell_exec("mv -f {$dir}/{$dumpfilename} " . Okapi::get_var_dir());
     shell_exec("rmdir {$dir}");
     # Update the database info.
     $metadata['meta']['filepath'] = Okapi::get_var_dir() . '/' . $dumpfilename;
     $metadata['meta']['content_type'] = $use_bzip2 ? "application/octet-stream" : "application/x-gzip";
     $metadata['meta']['public_filename'] = 'okapi-dump-r' . $metadata['revision'] . '.tar.' . ($use_bzip2 ? "bz2" : "gz");
     $metadata['meta']['uncompressed_size'] = $size;
     $metadata['meta']['compressed_size'] = filesize($metadata['meta']['filepath']);
     Cache::set("last_fulldump", $metadata, 10 * 86400);
 }
Developer: PaulinaKowalczuk, Project: oc-server3, Lines of code: 91, Source file: replicate_common.inc.php
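For context, generate_fulldump writes an index.json whose data_files key lists the partNNNNN.json files packaged into the tar archive, and each part file holds a JSON array of changelog entries whose change_type is 'replace'. The sketch below shows how a consumer might read an already unpacked dump; the directory path and the process_entry() handler are hypothetical, not part of OKAPI.

    <?php
    // Sketch only: iterate over an unpacked okapi-dump archive.
    // $dump_dir is a hypothetical path to the extracted tar contents.
    $dump_dir = '/path/to/unpacked/dump';
    $index = json_decode(file_get_contents($dump_dir . '/index.json'), true);

    echo "Dump revision: " . $index['revision'] . "\n";
    echo "Generated at:  " . $index['meta']['generated_at'] . "\n";

    foreach ($index['data_files'] as $filename) {
        $entries = json_decode(file_get_contents($dump_dir . '/' . $filename), true);
        foreach ($entries as $entry) {
            // In a fulldump every entry has change_type == 'replace'
            // (see the filtering loops in generate_fulldump above).
            process_entry($entry);  // hypothetical handler
        }
    }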

Example 2: call


//......... part of the code omitted here .........
             $result_ref['latest_logs'] = array();
         }
         # Get all log IDs with dates. Sort in groups. Filter out latest ones. This is the fastest
         # technique I could think of...
         $rs = Db::query("\n                select cache_id, uuid, date\n                from cache_logs\n                where\n                    cache_id in ('" . implode("','", array_map('mysql_real_escape_string', array_keys($cacheid2wptcode))) . "')\n                    and " . (Settings::get('OC_BRANCH') == 'oc.pl' ? "deleted = 0" : "true") . "\n                order by cache_id, date desc, date_created desc\n            ");
         $loguuids = array();
         $log2cache_map = array();
         if ($lpc !== null) {
             # User wants some of the latest logs.
             $tmp = array();
             while ($row = mysql_fetch_assoc($rs)) {
                 $tmp[$row['cache_id']][] = $row;
             }
             foreach ($tmp as $cache_key => &$rowslist_ref) {
                 usort($rowslist_ref, function ($rowa, $rowb) {
                     # (reverse order by date)
                     return $rowa['date'] < $rowb['date'] ? 1 : ($rowa['date'] == $rowb['date'] ? 0 : -1);
                 });
                 for ($i = 0; $i < min(count($rowslist_ref), $lpc); $i++) {
                     $loguuids[] = $rowslist_ref[$i]['uuid'];
                     $log2cache_map[$rowslist_ref[$i]['uuid']] = $cacheid2wptcode[$rowslist_ref[$i]['cache_id']];
                 }
             }
         } else {
             # User wants ALL logs.
             while ($row = mysql_fetch_assoc($rs)) {
                 $loguuids[] = $row['uuid'];
                 $log2cache_map[$row['uuid']] = $cacheid2wptcode[$row['cache_id']];
             }
         }
         # We need to retrieve logs/entry for each of the $logids. We do this in groups
         # (there is a limit for log uuids passed to logs/entries method).
         try {
             foreach (Okapi::make_groups($loguuids, 500) as $subset) {
                 $entries = OkapiServiceRunner::call("services/logs/entries", new OkapiInternalRequest($request->consumer, $request->token, array('log_uuids' => implode("|", $subset), 'fields' => $log_fields)));
                 foreach ($subset as $log_uuid) {
                     if ($entries[$log_uuid]) {
                         $results[$log2cache_map[$log_uuid]]['latest_logs'][] = $entries[$log_uuid];
                     }
                 }
             }
         } catch (Exception $e) {
             if ($e instanceof InvalidParam && $e->paramName == 'fields') {
                 throw new InvalidParam('log_fields', $e->whats_wrong_about_it);
             } else {
                 /* Something is wrong with OUR code. */
                 throw new Exception($e);
             }
         }
     }
     # My notes
     if (in_array('my_notes', $fields)) {
         if ($request->token == null) {
             throw new BadRequest("Level 3 Authentication is required to access 'my_notes' field.");
         }
         foreach ($results as &$result_ref) {
             $result_ref['my_notes'] = null;
         }
         if (Settings::get('OC_BRANCH') == 'oc.pl') {
             # OCPL uses cache_notes table to store notes.
             $rs = Db::query("\n                    select cache_id, max(date) as date, group_concat(`desc`) as `desc`\n                    from cache_notes\n                    where\n                        cache_id in ('" . implode("','", array_map('mysql_real_escape_string', array_keys($cacheid2wptcode))) . "')\n                        and user_id = '" . mysql_real_escape_string($request->token->user_id) . "'\n                    group by cache_id\n                ");
         } else {
             # OCDE uses coordinates table (with type == 2) to store notes (this is somewhat weird).
             $rs = Db::query("\n                    select cache_id, null as date, group_concat(description) as `desc`\n                    from coordinates\n                    where\n                        type = 2  -- personal note\n                        and cache_id in ('" . implode("','", array_map('mysql_real_escape_string', array_keys($cacheid2wptcode))) . "')\n                        and user_id = '" . mysql_real_escape_string($request->token->user_id) . "'\n                    group by cache_id\n                ");
         }
         while ($row = mysql_fetch_assoc($rs)) {
Developer: Slini11, Project: okapi, Lines of code: 67, Source file: geocaches.php
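The batching pattern at the core of Example 2 can be isolated as follows: the collected log uuids are chunked with Okapi::make_groups before calling services/logs/entries, because (as the comment in the example notes) that method limits how many log_uuids a single call may receive. This is a fragment-style sketch that reuses the variable names from the example ($loguuids, $log_fields, $request) and omits the surrounding error handling.

    // Sketch only: fetch log entries for a large uuid list in chunks of 500.
    $all_entries = array();
    foreach (Okapi::make_groups($loguuids, 500) as $subset) {
        $entries = OkapiServiceRunner::call(
            "services/logs/entries",
            new OkapiInternalRequest($request->consumer, $request->token, array(
                'log_uuids' => implode("|", $subset),
                'fields' => $log_fields,
            ))
        );
        // The result is keyed by uuid, so a union merge accumulates all chunks.
        $all_entries = $all_entries + $entries;
    }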


Note: The okapi\Okapi::make_groups method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are taken from open-source projects contributed by various developers; copyright of the source code remains with the original authors. Please consult the corresponding project's License before distributing or using the code, and do not reproduce this article without permission.