Code of paper 'Learning to Parse Wireframes in Images of Man-Made Environments', CVPR 2018
Folder/file | Description
---|---
junc | For training the junction detector.
linepx | For training the straight line pixel detector.
wireframe.py | Generates line segments/wireframes from predicted junctions and line pixels.
evaluation | Evaluation of junctions and wireframes.
Requirements
- python3
- pytorch 0.3.1
- opencv 3.3.1
- scipy, numpy, progress, protobuf
- joblib (for parallel data processing)
- tqdm
- [optional] dominate
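The pure-Python dependencies can be installed with pip; PyTorch 0.3.1 and OpenCV should be installed separately following their official instructions for your platform. A sketch, assuming the standard PyPI package names:

```shell
# Pure-Python dependencies (standard PyPI names; pytorch 0.3.1 and
# opencv 3.3.1 must be installed separately for your CUDA/platform)
pip3 install scipy numpy progress protobuf joblib tqdm
pip3 install dominate   # optional, for HTML visualization
```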
The code is written and tested in python3; please install all requirements for python3.
Prepare data
- Download the training data.
  - Download imgs from OneDrive, put them in data/, and unzip v1.1.zip.
  - Download the annotation from OneDrive, put it in data/, and unzip pointlines.zip.
- Preprocess the data.
Note: --json means you put the hyper-parameters in junc/hypes/1.json.
Training
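The hyper-parameter files passed via --json are plain JSON. A hypothetical sketch of what such a file might contain (the key names below are illustrative, not the repo's actual schema):

```json
{
  "exp_name": "1",
  "batch_size": 4,
  "learning_rate": 0.01,
  "max_epoch": 30,
  "image_size": 320
}
```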
- Train the junction detector.
- Train the line pixel detector.
Testing
- Test junction detector.
- Test line pixel detector.
- Combine the junction and line pixel predictions.
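The combination step in wireframe.py is more involved than can be shown here, but the underlying idea is to propose candidate segments between detected junctions and keep those supported by the predicted line-pixel map. A minimal numpy sketch, with hypothetical function names and thresholds:

```python
import numpy as np

def support_ratio(p, q, line_map, samples=32):
    """Fraction of points sampled along segment p-q that land on
    predicted line pixels (line_map is a binary HxW array)."""
    ts = np.linspace(0.0, 1.0, samples)
    pts = np.round(p[None, :] * (1 - ts[:, None]) + q[None, :] * ts[:, None]).astype(int)
    h, w = line_map.shape
    pts[:, 0] = np.clip(pts[:, 0], 0, h - 1)
    pts[:, 1] = np.clip(pts[:, 1], 0, w - 1)
    return line_map[pts[:, 0], pts[:, 1]].mean()

def link_junctions(junctions, line_map, thresh=0.8):
    """Keep a segment between every junction pair whose connecting
    path is sufficiently covered by line pixels."""
    segments = []
    for i in range(len(junctions)):
        for j in range(i + 1, len(junctions)):
            if support_ratio(junctions[i], junctions[j], line_map) >= thresh:
                segments.append((i, j))
    return segments
```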
Evaluation
The code for evaluation is in evaluation/junc and evaluation/wireframe. The expected precision/recall curves for junctions and wireframes are shown in the accompanying figures.
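The evaluation matches predicted junctions to ground truth before computing precision and recall. A simplified sketch of the idea, using greedy one-to-one matching under a pixel-distance threshold (the threshold value and the matching strategy are assumptions, not necessarily the repo's exact protocol):

```python
import numpy as np

def precision_recall(pred, gt, dist_thresh=8.0):
    """Greedily match each predicted junction to its nearest unmatched
    ground-truth junction within dist_thresh pixels, then report P/R."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    matched_gt = set()
    tp = 0
    for p in pred:
        if len(gt) == 0:
            break
        d = np.linalg.norm(gt - p, axis=1)   # distance to every GT junction
        j = int(np.argmin(d))
        if d[j] <= dist_thresh and j not in matched_gt:
            matched_gt.add(j)
            tp += 1
    precision = tp / max(len(pred), 1)
    recall = tp / max(len(gt), 1)
    return precision, recall
```

Sweeping a confidence threshold over the detector's output and recording (precision, recall) at each setting traces out the curve.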
Visualize the result
To visualize the results of different methods side by side, we recommend generating an HTML file using dominate, with one column per method.
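dominate builds such pages programmatically; if you prefer a dependency-free route, the same column layout can also be emitted with the standard library. A sketch in which the file layout (result images stored under one directory per method) and all names are hypothetical:

```python
from pathlib import Path

def results_table(image_names, methods, out="compare.html"):
    """Write an HTML table with one row per image and one column per
    method, assuming results live at <method>/<image> relative to the page."""
    rows = []
    header = "".join(f"<th>{m}</th>" for m in methods)
    rows.append(f"<tr><th>image</th>{header}</tr>")
    for name in image_names:
        cells = "".join(f'<td><img src="{m}/{name}" width="256"></td>' for m in methods)
        rows.append(f"<tr><td>{name}</td>{cells}</tr>")
    html = "<html><body><table border='1'>\n" + "\n".join(rows) + "\n</table></body></html>"
    Path(out).write_text(html)
    return html
```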
Citation
Abstract
In this paper, we propose a learning-based approach to the task of automatically extracting a “wireframe” representation for images of cluttered man-made environments. The wireframe (see Fig. 1) contains all salient straight lines and their junctions of the scene that encode efficiently and accurately large-scale geometry and object shapes. To this end, we have built a very large new dataset of over 5,000 images with wireframes thoroughly labelled by humans. We have proposed two convolutional neural networks that are suitable for extracting junctions and lines with large spatial support, respectively. The networks trained on our dataset have achieved significantly better performance than state-of-the-art methods for junction detection and line segment detection, respectively. We have conducted extensive experiments to evaluate quantitatively and qualitatively the wireframes obtained by our method, and have convincingly shown that effectively and efficiently parsing wireframes for images of man-made environments is a feasible goal within reach. Such wireframes could benefit many important visual tasks such as feature correspondence, 3D reconstruction, vision-based mapping, localization, and navigation. The data and source code are available at http://thiscodeurl.