darknet/src (directory listing; latest commit 2015-02-04 12:41:20 -08:00)
| File                     | Last commit message                   | Committed                 |
| ------------------------ | ------------------------------------- | ------------------------- |
| activation_kernels.cu    | CUDA so fast                          | 2015-01-22 16:38:24 -08:00 |
| activations.c            | CUDA so fast                          | 2015-01-22 16:38:24 -08:00 |
| activations.h            | CUDA so fast                          | 2015-01-22 16:38:24 -08:00 |
| blas_kernels.cu          | CUDA so fast                          | 2015-01-22 16:38:24 -08:00 |
| blas.c                   | CUDA so fast                          | 2015-01-22 16:38:24 -08:00 |
| blas.h                   | CUDA so fast                          | 2015-01-22 16:38:24 -08:00 |
| col2im_kernels.cu        | CUDA so fast                          | 2015-01-22 16:38:24 -08:00 |
| col2im.c                 | CUDA so fast                          | 2015-01-22 16:38:24 -08:00 |
| col2im.h                 | CUDA so fast                          | 2015-01-22 16:38:24 -08:00 |
| connected_layer.c        | Stable place to commit                | 2015-02-04 12:41:20 -08:00 |
| connected_layer.h        | CUDA so fast                          | 2015-01-22 16:38:24 -08:00 |
| convolutional_kernels.cu | Stable place to commit                | 2015-02-04 12:41:20 -08:00 |
| convolutional_layer.c    | Stable place to commit                | 2015-02-04 12:41:20 -08:00 |
| convolutional_layer.h    | CUDA so fast                          | 2015-01-22 16:38:24 -08:00 |
| cost_layer.c             | CUDA so fast                          | 2015-01-22 16:38:24 -08:00 |
| cost_layer.h             | CUDA so fast                          | 2015-01-22 16:38:24 -08:00 |
| cpu_gemm.c               | Small updates                         | 2014-04-30 16:17:40 -07:00 |
| crop_layer_kernels.cu    | idk, probably something changed       | 2015-01-30 22:05:23 -08:00 |
| crop_layer.c             | idk, probably something changed       | 2015-01-30 22:05:23 -08:00 |
| crop_layer.h             | idk, probably something changed       | 2015-01-30 22:05:23 -08:00 |
| cuda.c                   | idk, probably something changed       | 2015-01-30 22:05:23 -08:00 |
| cuda.h                   | idk, probably something changed       | 2015-01-30 22:05:23 -08:00 |
| darknet.c                | Stable place to commit                | 2015-02-04 12:41:20 -08:00 |
| data.c                   | idk, probably something changed       | 2015-01-30 22:05:23 -08:00 |
| data.h                   | Need to fix line reads                | 2014-12-28 09:42:35 -08:00 |
| detection_layer.c        | Added batch to col2im, padding option | 2014-07-13 22:07:51 -07:00 |
| detection_layer.h        | Added batch to col2im, padding option | 2014-07-13 22:07:51 -07:00 |
| dropout_layer_kernels.cu | CUDA so fast                          | 2015-01-22 16:38:24 -08:00 |
| dropout_layer.c          | CUDA so fast                          | 2015-01-22 16:38:24 -08:00 |
| dropout_layer.h          | CUDA so fast                          | 2015-01-22 16:38:24 -08:00 |
| freeweight_layer.c       | checkpoint                            | 2014-11-18 13:51:04 -08:00 |
| freeweight_layer.h       | Convolutional working on GPU          | 2014-10-13 00:31:10 -07:00 |
| gemm.c                   | Stable place to commit                | 2015-02-04 12:41:20 -08:00 |
| gemm.h                   | CUDA so fast                          | 2015-01-22 16:38:24 -08:00 |
| im2col_kernels.cu        | CUDA so fast                          | 2015-01-22 16:38:24 -08:00 |
| im2col.c                 | CUDA so fast                          | 2015-01-22 16:38:24 -08:00 |
| im2col.h                 | CUDA so fast                          | 2015-01-22 16:38:24 -08:00 |
| image.c                  | idk, probably something changed       | 2015-01-30 22:05:23 -08:00 |
| image.h                  | Better imagenet distributed training  | 2014-12-11 13:15:26 -08:00 |
| list.c                   | Slowly refactoring and pushing to GPU | 2014-05-02 15:20:34 -07:00 |
| list.h                   | Parsing, image loading, lots of stuff | 2013-11-13 10:50:38 -08:00 |
| matrix.c                 | Better imagenet distributed training  | 2014-12-11 13:15:26 -08:00 |
| matrix.h                 | Better imagenet distributed training  | 2014-12-11 13:15:26 -08:00 |
| maxpool_layer_kernels.cu | CUDA so fast                          | 2015-01-22 16:38:24 -08:00 |
| maxpool_layer.c          | CUDA so fast                          | 2015-01-22 16:38:24 -08:00 |
| maxpool_layer.h          | CUDA so fast                          | 2015-01-22 16:38:24 -08:00 |
| network_kernels.cu       | Stable place to commit                | 2015-02-04 12:41:20 -08:00 |
| network.c                | Stable place to commit                | 2015-02-04 12:41:20 -08:00 |
| network.h                | CUDA so fast                          | 2015-01-22 16:38:24 -08:00 |
| normalization_layer.c    | probably how maxpool layers should be | 2014-08-08 12:04:15 -07:00 |
| normalization_layer.h    | Visualizations?                       | 2014-04-16 17:05:29 -07:00 |
| option_list.c            | probably how maxpool layers should be | 2014-08-08 12:04:15 -07:00 |
| option_list.h            | probably how maxpool layers should be | 2014-08-08 12:04:15 -07:00 |
| parser.c                 | CUDA so fast                          | 2015-01-22 16:38:24 -08:00 |
| parser.h                 | probably how maxpool layers should be | 2014-08-08 12:04:15 -07:00 |
| server.c                 | lots of cleaning                      | 2014-12-16 15:34:10 -08:00 |
| server.h                 | Distributed training                  | 2014-12-07 00:41:26 -08:00 |
| softmax_layer_kernels.cu | CUDA so fast                          | 2015-01-22 16:38:24 -08:00 |
| softmax_layer.c          | CUDA so fast                          | 2015-01-22 16:38:24 -08:00 |
| softmax_layer.h          | CUDA so fast                          | 2015-01-22 16:38:24 -08:00 |
| utils.c                  | Stable place to commit                | 2015-02-04 12:41:20 -08:00 |
| utils.h                  | Stable place to commit                | 2015-02-04 12:41:20 -08:00 |