Application Examples

char-rnn-tf
This program automatically generates Chinese text (when the training corpus is English, it can also generate English text); the content and form of the generated text depend on the training corpus. The basic idea of the model follows karpathy…

OpenAI Gym
OpenAI Gym is a toolkit for developing and comparing reinforcement learning algorithms. It makes no assumptions about the structure of your agent, and is compatible with any numerical computation library, such as TensorFlow or Theano.
Installation: pip install gym[all], or install from source:
git clone https://github.com/openai/gym
cd gym
pip install -e .[all]
Resources
Official website: https:…
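As a quick illustration of the Gym interface, here is a minimal sketch of the standard interaction loop; the CartPole-v0 environment and the random-action policy are chosen purely for demonstration:

import gym

env = gym.make("CartPole-v0")    # create an environment
obs = env.reset()                # start a new episode
for _ in range(100):
    action = env.action_space.sample()           # sample a random action
    obs, reward, done, info = env.step(action)   # advance one timestep
    if done:                                     # episode ended; start a new one
        obs = env.reset()
env.close()

Each call to step returns the next observation, the reward for the action taken, a done flag, and an info dictionary; an agent replaces the random sampling with its own policy.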

Here we’ll describe in detail the full set of command-line flags available for preprocessing, training, and sampling.
Preprocessing
The preprocessing script scripts/preprocess.py accepts the following command-line flags:
--input_txt: Path to the text file to be used for training. Default is the tiny-shakespeare.txt dataset.
--output_h5: Path to the HDF5 file where preprocessed data should be written.
--output_json: Path to the JSON file where preprocessed data should be written.
--val_frac: What fraction of the data to use as a validation set; default is 0.…
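For example, a preprocessing run combining these flags might look like the following; the data/ paths and the 0.1 validation fraction are illustrative values chosen here, not defaults:

python scripts/preprocess.py \
  --input_txt data/my_corpus.txt \
  --output_h5 data/my_corpus.h5 \
  --output_json data/my_corpus.json \
  --val_frac 0.1

This writes the preprocessed training data to the HDF5 file and metadata (such as the vocabulary) to the JSON file, holding out a fraction of the data for validation.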

Modules
torch-rnn provides high-performance, reusable RNN and LSTM modules. These modules have no dependencies other than torch and nn, and each lives in a single file, so they can easily be incorporated into other projects. We also provide a LanguageModel module used for character-level language modeling; this is less reusable, but it demonstrates that the LSTM and RNN modules can be mixed with existing torch modules.
VanillaRNN
rnn = nn.VanillaRNN(D, H)
VanillaRNN is a torch nn.…
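To sketch how such a module might be used, here is a minimal Lua example; the {h0, x} input convention and the N x T x D sequence layout are assumptions based on typical torch-rnn usage, so check the module documentation before relying on them:

require 'torch'
require 'nn'
require 'VanillaRNN'   -- assumes VanillaRNN.lua is on the Lua package path

local N, T, D, H = 2, 5, 10, 20   -- minibatch size, timesteps, input dim, hidden dim
local rnn = nn.VanillaRNN(D, H)

local h0 = torch.zeros(N, H)      -- initial hidden state
local x  = torch.randn(N, T, D)   -- input sequence
local h  = rnn:forward({h0, x})   -- assumed to return the hidden states for every timestep

Because the module is a standard nn component, it can be combined with existing torch modules in containers such as nn.Sequential.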

The MIT License (MIT)
Copyright (c) 2016 Justin Johnson
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: …

torch-rnn
torch-rnn provides high-performance, reusable RNN and LSTM modules for torch7, and uses these modules for character-level language modeling similar to char-rnn. You can find documentation for the RNN and LSTM modules here; they have no dependencies other than torch and nn, so they should be easy to integrate into existing projects. Compared to char-rnn, torch-rnn is up to 1.9x faster and uses up to 7x less memory. For more details see the Benchmark section below.
