diff --git a/Software/Applications/Caffe/Caffe_SSD_目标检测.md b/Software/Applications/Caffe/Caffe_SSD_目标检测.md
new file mode 100644
index 0000000..6ee6b09
--- /dev/null
+++ b/Software/Applications/Caffe/Caffe_SSD_目标检测.md
@@ -0,0 +1,112 @@
+# Caffe SSD Object Detection
+
+Based on Ubuntu 16.04 and Python 2.7.
+
+## 1.Installation
+
+Install the development tools following the steps in《Ubuntu 初始配置》.
+
+1. Get the code. We will call the directory that you cloned Caffe into $CAFFE_ROOT.
+
+    ```bash
+    git clone https://github.com/weiliu89/caffe.git
+    cd caffe
+    git checkout ssd
+    ```
+
+2. Build the code. Please follow the Caffe instructions to install all necessary packages and build it.
+
+    ```bash
+    # Modify Makefile.config according to your Caffe installation.
+    cp Makefile.config.example Makefile.config
+    # To build a CPU-only Caffe, uncomment "CPU_ONLY := 1" in Makefile.config.
+    make -j8
+    # Make sure to include $CAFFE_ROOT/python to your PYTHONPATH.
+    make py
+    make test -j8
+    # (Optional)
+    make runtest -j8
+    cd $CAFFE_ROOT/python
+    for req in $(cat requirements.txt); do pip install $req; done
+    # or simply:
+    pip install -r requirements.txt
+    export PYTHONPATH=$PYTHONPATH:$CAFFE_ROOT/python
+    cd $CAFFE_ROOT
+    make pycaffe
+    # Verify the pycaffe interface
+    python
+    >>>import caffe
+    >>>exit()
+    ```
+
+## 2.Preparation
+
+1. Download the [fully convolutional reduced (atrous) VGGNet](https://gist.github.com/weiliu89/2ed6e13bfd5b57cf81d6). By default, we assume the model is stored in $CAFFE_ROOT/models/VGGNet/.
+2. Download the [VOC2007 and VOC2012 datasets](https://pjreddie.com/projects/pascal-voc-dataset-mirror/). By default, we assume the data is stored in $HOME/data/.
+
+    ```bash
+    # Download the data.
+    cd $HOME/data
+    wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
+    wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
+    wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar
+    # Extract the data.
+    tar -xvf VOCtrainval_11-May-2012.tar
+    tar -xvf VOCtrainval_06-Nov-2007.tar
+    tar -xvf VOCtest_06-Nov-2007.tar
+    ```
+
+3. Create the LMDB file.
+
+## 3.Train/Eval
+
+1. Train your model and evaluate the model on the fly.
+
+    ```bash
+    # It will create model definition files and save snapshot models in:
+    #   - $CAFFE_ROOT/models/VGGNet/VOC0712/SSD_300x300/
+    # and job file, log file, and the python script in:
+    #   - $CAFFE_ROOT/jobs/VGGNet/VOC0712/SSD_300x300/
+    # and save temporary evaluation results in:
+    #   - $HOME/data/VOCdevkit/results/VOC2007/SSD_300x300/
+    # It should reach 77.* mAP at 120k iterations.
+    python examples/ssd/ssd_pascal.py
+    ```
+
+    If you don't have time to train your model, you can download a pre-trained model [here](https://drive.google.com/open?id=0BzKzrI_SkD1_WVVTSmQxU0dVRzA).
+
+2. Evaluate the most recent snapshot.
+
+    ```bash
+    # If you would like to test a model you trained, you can do:
+    python examples/ssd/score_ssd_pascal.py
+    ```
+
+3. Test your model using a webcam. Note: press "esc" to stop.
+
+    ```bash
+    # If you would like to attach a webcam to a model you trained, you can do:
+    python examples/ssd/ssd_pascal_webcam.py
+    ```
+
+    [Here](https://drive.google.com/file/d/0BzKzrI_SkD1_R09NcjM1eElLcWc/view) is a demo video of running an SSD500 model trained on the [MSCOCO](http://mscoco.org/) dataset.
+
+4. Check out [examples/ssd_detect.ipynb](https://hub.fastgit.org/weiliu89/caffe/blob/ssd/examples/ssd_detect.ipynb) or [examples/ssd/ssd_detect.cpp](https://hub.fastgit.org/weiliu89/caffe/blob/ssd/examples/ssd/ssd_detect.cpp) on how to detect objects using an SSD model.
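+
+    For a quick scripted check, detection can also be run directly from pycaffe. The sketch below roughly mirrors what the notebook does; the model paths and sample image are assumptions (the deploy file and 120k-iteration snapshot produced by ssd_pascal.py, plus an image shipped with Caffe), so adjust them to your setup and run it from $CAFFE_ROOT.
+
+    ```python
+    import numpy as np
+    import caffe
+
+    caffe.set_mode_cpu()  # or caffe.set_mode_gpu() if built with CUDA
+
+    # Assumed paths: the deploy file and snapshot generated by ssd_pascal.py.
+    model_def = 'models/VGGNet/VOC0712/SSD_300x300/deploy.prototxt'
+    model_weights = 'models/VGGNet/VOC0712/SSD_300x300/VGG_VOC0712_SSD_300x300_iter_120000.caffemodel'
+    net = caffe.Net(model_def, model_weights, caffe.TEST)
+
+    # Preprocess: HWC/RGB/[0,1] from caffe.io -> CHW/BGR/[0,255] minus the VGG mean.
+    transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
+    transformer.set_transpose('data', (2, 0, 1))
+    transformer.set_mean('data', np.array([104, 117, 123]))
+    transformer.set_raw_scale('data', 255)
+    transformer.set_channel_swap('data', (2, 1, 0))
+
+    image = caffe.io.load_image('examples/images/fish-bike.jpg')
+    net.blobs['data'].reshape(1, 3, 300, 300)
+    net.blobs['data'].data[...] = transformer.preprocess('data', image)
+
+    # The SSD deploy net ends in a DetectionOutput layer; each row of the output is
+    # [image_id, label, confidence, xmin, ymin, xmax, ymax], with box coordinates
+    # normalized to [0, 1]. Label indices map to class names via labelmap_voc.prototxt.
+    detections = net.forward()['detection_out'][0, 0]
+    for image_id, label, conf, xmin, ymin, xmax, ymax in detections:
+        if conf >= 0.5:
+            print('label %d  conf %.2f  box (%.2f, %.2f, %.2f, %.2f)'
+                  % (int(label), conf, xmin, ymin, xmax, ymax))
+    ```
+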
+    Check out [examples/ssd/plot_detections.py](https://hub.fastgit.org/weiliu89/caffe/blob/ssd/examples/ssd/plot_detections.py) on how to plot detection results output by ssd_detect.cpp.
+5. To train on other datasets, please refer to data/OTHERDATASET for more details. We currently add support for COCO and ILSVRC2016. We recommend using [examples/ssd_detect.ipynb](https://hub.fastgit.org/weiliu89/caffe/blob/ssd/examples/ssd_detect.ipynb) to check whether the new dataset is prepared correctly.
+
+## 4.Models
+
+We have provided the latest models that are trained from different datasets. To help reproduce the results in [Table 6](https://arxiv.org/pdf/1512.02325v4.pdf), most models contain a pretrained `.caffemodel` file, many `.prototxt` files, and Python scripts.
+
+1. PASCAL VOC models:
+    * 07+12: [SSD300*](https://drive.google.com/open?id=0BzKzrI_SkD1_WVVTSmQxU0dVRzA), [SSD512*](https://drive.google.com/open?id=0BzKzrI_SkD1_ZDIxVHBEcUNBb2s)
+    * 07++12: [SSD300*](https://drive.google.com/open?id=0BzKzrI_SkD1_WnR2T1BGVWlCZHM), [SSD512*](https://drive.google.com/open?id=0BzKzrI_SkD1_MjFjNTlnempHNWs)
+    * COCO[1]: [SSD300*](https://drive.google.com/open?id=0BzKzrI_SkD1_NDlVeFJDc2tIU1k), [SSD512*](https://drive.google.com/open?id=0BzKzrI_SkD1_TW4wTC14aDdCTDQ)
+    * 07+12+COCO: [SSD300*](https://drive.google.com/open?id=0BzKzrI_SkD1_UFpoU01yLS1SaG8), [SSD512*](https://drive.google.com/open?id=0BzKzrI_SkD1_X3ZXQUUtM0xNeEk)
+    * 07++12+COCO: [SSD300*](https://drive.google.com/open?id=0BzKzrI_SkD1_TkFPTEQ1Z091SUE), [SSD512*](https://drive.google.com/open?id=0BzKzrI_SkD1_NVVNdWdYNEh1WTA)
+
+2. COCO models:
+    * trainval35k: [SSD300*](https://drive.google.com/open?id=0BzKzrI_SkD1_dUY1Ml9GRTFpUWc), [SSD512*](https://drive.google.com/open?id=0BzKzrI_SkD1_dlJpZHJzOXd3MTg)
+
+3. ILSVRC models:
+    * trainval1: [SSD300*](https://drive.google.com/open?id=0BzKzrI_SkD1_a2NKQ2d1d043VXM), [SSD500](https://drive.google.com/open?id=0BzKzrI_SkD1_X2ZCLVgwLTgzaTQ)
+
+[1] We use [`examples/convert_model.ipynb`](https://github.com/weiliu89/caffe/blob/ssd/examples/convert_model.ipynb) to extract a VOC model from a pretrained COCO model.
diff --git a/Software/Development/Language/Python/Package/Python_包.md b/Software/Development/Language/Python/Package/Python_包.md
index e400ffc..b85bdee 100644
--- a/Software/Development/Language/Python/Package/Python_包.md
+++ b/Software/Development/Language/Python/Package/Python_包.md
@@ -5,9 +5,9 @@
 In general, you can search for a Python package and then use:
 
 ```bash
-pip2 install -i http://mirrors.aliyun.com/pypi/simple/
+pip2 install <package>[==<version>] -i http://mirrors.aliyun.com/pypi/simple/
 # or
-pip3 install -i https://pypi.mirrors.ustc.edu.cn/simple/
+pip3 install <package>[==<version>] -i https://pypi.mirrors.ustc.edu.cn/simple/
 ```
 
 to install it.
diff --git a/Software/System/Linux/Editions/Ubuntu/Ubuntu_初始配置.md b/Software/System/Linux/Editions/Ubuntu/Ubuntu_初始配置.md
index f2e248a..204381a 100644
--- a/Software/System/Linux/Editions/Ubuntu/Ubuntu_初始配置.md
+++ b/Software/System/Linux/Editions/Ubuntu/Ubuntu_初始配置.md
@@ -106,7 +106,7 @@ dpkg -l
 
 ## Install Development Tools
 
 ```bash
-sudo apt-get install bison build-essential make gcc gcc-multilib global git-core git openssl module-init-tools gnu-efi xz-utils debianutils iputils-ping e2fslibs-dev ccache gawk wget diffstat bc zip unzip chrpath socat texinfo cpio flex minicom xterm gtkterm parted gparted tmux python python-pip python-wand python-crypto python3 python3-pip python3-pexpect libncurses-dev libssl-dev libpciaccess-dev uuid-dev libsystemd-dev libevent-dev libxml2-dev libusb-1.0-0-dev liblz4-tool libsdl1.2-dev libssl-dev libblkid-dev libcv-dev libopencv-*-dev protobuf-c-compiler protobuf-compiler libprotobuf-dev libboost-dev libleveldb-dev libgflags-dev libgoogle-glog-dev libblas-dev liblmdb-dev libsnappy-dev libopenblas-dev python-numpy libboost-python-dev
+sudo apt-get install bison build-essential make gcc gcc-multilib global git-core git openssl module-init-tools gnu-efi xz-utils debianutils iputils-ping e2fslibs-dev ccache gawk wget diffstat bc zip unzip chrpath socat texinfo cpio flex minicom xterm gtkterm parted gparted tmux python python-pip python-wand python-crypto python3 python3-pip python3-pexpect libncurses-dev libssl-dev libpciaccess-dev uuid-dev libsystemd-dev libevent-dev libxml2-dev libusb-1.0-0-dev liblz4-tool libsdl1.2-dev libssl-dev libblkid-dev libcv-dev libopencv-*-dev protobuf-c-compiler protobuf-compiler libprotobuf-dev libboost-dev libleveldb-dev libgflags-dev libgoogle-glog-dev libblas-dev liblmdb-dev libsnappy-dev libopenblas-dev python-numpy libboost-python-dev gfortran python-scikits-learn python-skimage-lib
 sudo pip3 install kconfiglib
 ```