

Java StringTokenizer.countTokens Method Code Examples

This article collects typical usage examples of the Java method com.sun.squawk.util.StringTokenizer.countTokens. If you are asking yourself: what exactly does StringTokenizer.countTokens do, how is it used, and what do real calls look like? Then the curated method examples below should help. You can also explore further usage examples of com.sun.squawk.util.StringTokenizer itself.


The following presents 5 code examples of StringTokenizer.countTokens, sorted by popularity by default. You can upvote the examples you like or find useful; your votes help the system recommend better Java code examples.
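Before the examples, a minimal sketch of the method's semantics. It is shown here with java.util.StringTokenizer, on the assumption that the Squawk class mirrors the standard API: countTokens() reports how many calls to nextToken() can still succeed, and the count decreases as tokens are consumed.

```java
import java.util.StringTokenizer;

public class CountTokensDemo {
    public static void main(String[] args) {
        StringTokenizer st = new StringTokenizer("a=b", "=");
        System.out.println(st.countTokens()); // prints 2: both tokens are still pending

        st.nextToken();                       // consume "a"
        System.out.println(st.countTokens()); // prints 1: the count shrinks as tokens are read
    }
}
```

Because the count reflects only the *remaining* tokens, the examples below all call countTokens() before any nextToken() call when they use it to validate input or size an array.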

Example 1: parseRelocationFile

import com.sun.squawk.util.StringTokenizer; // import the package/class that the method depends on
/**
 * Parses a given file containing relocation information and updates the {@link #relocationTable relocation table}
 * for a given object memory and its parents. Each line in a relocation file must be of the format {@code <url>=<address>}.
 *
 * @param file    the name of the file containing relocation information
 * @param om      an object memory
 * @throws IOException if an error occurs while reading the file
 */
private void parseRelocationFile(String file, ObjectMemory om) throws IOException {
    if (!new File(file).exists()) {
        if (!file.equals("squawk.reloc")) {
            throw new RuntimeException(file + " does not exist");
        }
    }

    Properties properties = new Properties();
    BufferedReader br = new BufferedReader(new FileReader(file));
    String line = br.readLine();
    int lno = 1;
    while (line != null) {
        StringTokenizer st = new StringTokenizer(line, "=");
        if (st.countTokens() != 2) {
            throw new RuntimeException(file + ":" + lno + ": does not match '<url>=<address>' pattern");
        }
        properties.setProperty(st.nextToken(), st.nextToken());
        line = br.readLine();
        lno++;
    }
    br.close(); // release the file handle before building the relocation table

    relocationTable = new Hashtable<ObjectMemory, Address>();
    if (setRelocationFor(om, properties) == null) {
        relocationTable = null;
    }
}
 
Author: tomatsu, Project: squawk, Lines of code: 35, Source: ObjectMemoryMapper.java

Example 2: Words

import com.sun.squawk.util.StringTokenizer; // import the package/class that the method depends on
/**
 * Creates a Words instance.
 *
 * @param trace     the substring of a trace line containing zero or more word values
 * @param hasTypes  specifies whether each word value is annotated with a type
 */
public Words(String trace, boolean hasTypes) {
    StringTokenizer st = new StringTokenizer(trace, ",");
    if (st.hasMoreTokens()) {
        int count = st.countTokens();
        values = new long[count];
        types = (hasTypes) ? new byte[count] : null;

        for (int i = 0; i != count; ++i) {
            String token = st.nextToken();
            if (hasTypes) {
                int index = token.indexOf('#');
                String value = token.substring(0, index);
                if (value.equals("X")) {
                    values[i] = 0xdeadbeef;
                } else {
                    values[i] = Long.parseLong(value);
                }
                types[i] = Byte.parseByte(token.substring(index + 1));
            } else {
                if (token.equals("X")) {
                    values[i] = 0xdeadbeef;
                } else {
                    values[i] = Long.parseLong(token);
                }
            }
        }
    } else {
        values = NO_VALUES;
        types = null;
    }
}
 
Author: tomatsu, Project: squawk, Lines of code: 38, Source: TraceViewer.java

Example 3: loadConstants

import com.sun.squawk.util.StringTokenizer; // import the package/class that the method depends on
public static void loadConstants() {
    try {
        //get a connection to the constants file and read it
        final String fileName = "file:///" + CONSTANTS_FILE_NAME;
        printIfDebug("Opening constants file: " + fileName);
        FileConnection commandFileConnection = (FileConnection) Connector.open(fileName, Connector.READ);
        DataInputStream commandFileStream = commandFileConnection.openDataInputStream();
        StringBuffer fileContentsBuffer = new StringBuffer((int) commandFileConnection.fileSize());

        //read from the file until end of file is reached
        byte[] buff = new byte[255];
        int n;
        while ((n = commandFileStream.read(buff)) != -1) {
            //append only the bytes actually read; a short final read must not append stale data
            fileContentsBuffer.append(new String(buff, 0, n));
        }
        commandFileStream.close();
        commandFileConnection.close();
        String fileContents = fileContentsBuffer.toString();
        printIfDebug("Constants file output: " + fileContents);
        StringTokenizer lineTokenizer = new StringTokenizer(fileContents, "\n");
        CONSTANTS = new Vector(lineTokenizer.countTokens());

        //for each line, split into space-separated tokens
        while (lineTokenizer.hasMoreTokens()) {
            String line = lineTokenizer.nextToken().trim();
            if (line.startsWith("#")) {
                continue;
            }
            StringTokenizer spaceTokenizer = new StringTokenizer(line, " ");
            //map the first two tokens
            if (spaceTokenizer.countTokens() > 1) {
                final String key = spaceTokenizer.nextToken().trim();
                final String value = spaceTokenizer.nextToken().trim();
                CONSTANTS.addElement(new Constant(key, value));
                printIfDebug("Put constant: " + key + ": " + value + ", of type " + Constant.TYPE_NAMES[((Constant) CONSTANTS.lastElement()).getType()]);
            }
        }
    } catch (Exception ex) {
        System.out.println("Could not load file " + CONSTANTS_FILE_NAME + ". Are you sure it is in the root directory of the cRIO?");
    }
}
 
Author: SaratogaMSET, Project: 649code2014, Lines of code: 41, Source: Constants.java
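The parsing pattern in Example 3 can be isolated into a small, self-contained sketch: split the file contents into lines, skip comment lines, and use countTokens() as a guard that a line holds at least a key and a value before consuming tokens. This uses java.util.StringTokenizer and a generic Vector in place of the Squawk/ME classes, and the file contents and constant names are made-up illustrations.

```java
import java.util.StringTokenizer;
import java.util.Vector;

public class ConstantsParseDemo {

    /** Parses "key value" lines, skipping '#' comment lines, as in loadConstants above. */
    static Vector<String[]> parse(String fileContents) {
        StringTokenizer lineTokenizer = new StringTokenizer(fileContents, "\n");
        // countTokens() pre-sizes the vector to the number of lines
        Vector<String[]> constants = new Vector<>(lineTokenizer.countTokens());
        while (lineTokenizer.hasMoreTokens()) {
            String line = lineTokenizer.nextToken().trim();
            if (line.startsWith("#")) {
                continue; // comment line
            }
            StringTokenizer spaceTokenizer = new StringTokenizer(line, " ");
            if (spaceTokenizer.countTokens() > 1) { // need at least a key and a value
                constants.addElement(new String[] {
                        spaceTokenizer.nextToken().trim(),
                        spaceTokenizer.nextToken().trim() });
            }
        }
        return constants;
    }

    public static void main(String[] args) {
        Vector<String[]> c = parse("# robot tuning\nkMaxSpeed 3.5\nkPort 1\n");
        System.out.println(c.size() + " constants, first key: " + c.get(0)[0]);
    }
}
```

Note that the countTokens() guard silently drops malformed lines with fewer than two tokens, which matches the lenient behavior of the original loadConstants.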

Example 4: tokenizeData

import com.sun.squawk.util.StringTokenizer; // import the package/class that the method depends on
/**
 * Separates the input string into tokens using the configured delimiter.
 * @param input the string to be tokenized
 * @return an array holding the tokens of the input string
 */
public synchronized String[] tokenizeData(String input) {
    StringTokenizer tokenizer = new StringTokenizer(input, String.valueOf(delimiter));
    String[] output = new String[tokenizer.countTokens()];
    
    for(int i = 0; i < output.length; i++) {
        output[i] = tokenizer.nextToken();
    }
    return output;
}
 
Author: frc3946, Project: UltimateAscent, Lines of code: 15, Source: ThreadedPi.java
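The array-sizing idiom in Example 4 is worth calling out: countTokens() is read once, before any token is consumed, so the result array can be allocated at exactly the right size. A standalone sketch using java.util.StringTokenizer, with the delimiter passed as a parameter instead of the field used in the original:

```java
import java.util.StringTokenizer;

public class TokenizeDemo {

    /** Standalone variant of tokenizeData above; the delimiter is a parameter here. */
    static String[] tokenizeData(String input, char delimiter) {
        StringTokenizer tokenizer = new StringTokenizer(input, String.valueOf(delimiter));
        // size the array up front: countTokens() must be called before any nextToken()
        String[] output = new String[tokenizer.countTokens()];
        for (int i = 0; i < output.length; i++) {
            output[i] = tokenizer.nextToken();
        }
        return output;
    }

    public static void main(String[] args) {
        String[] parts = tokenizeData("12.5,3.0,ok", ',');
        System.out.println(parts.length); // prints 3
    }
}
```

One caveat carried over from StringTokenizer itself: consecutive delimiters produce no empty tokens, so "a,,b" yields two tokens, not three; callers expecting positional fields should keep that in mind.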

Example 5: tokenizeData

import com.sun.squawk.util.StringTokenizer; // import the package/class that the method depends on
/**
 * Separates the input string into tokens using the configured delimiter.
 * @param input the string to be tokenized
 * @return an array holding the tokens of the input string
 */
public String[] tokenizeData(String input) {
    StringTokenizer tokenizer = new StringTokenizer(input, String.valueOf(delimiter));
    String[] output = new String[tokenizer.countTokens()];
    
    for(int i = 0; i < output.length; i++) {
        output[i] = tokenizer.nextToken();
    }
    return output;
}
 
Author: frc3946, Project: UltimateAscent, Lines of code: 15, Source: SocketPi.java


Note: the com.sun.squawk.util.StringTokenizer.countTokens examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright of the source code remains with the original authors. Consult each project's license before distributing or using the code. Do not reproduce without permission.