Commit Graph

  • 8b51becb6c AnotherChange UbaidKhanAtGitHub 2024-03-04 15:24:29 +0500
  • 8a0336839b Testing_Merge_Functionality UbaidKhanAtGitHub 2024-03-04 15:10:56 +0500
  • f27beae4b9 Merge eb5efef844 into e2117f2fb8 Gourav Chowdhary 2023-09-01 16:33:59 -0700
  • 398f917a85 Merge c6d7a29aa4 into e2117f2fb8 Aleksandr Korotaevskiy 2023-09-01 16:33:59 -0700
  • b2fc6e070d Update train.py UbaidKhanAtGitHub 2023-09-01 03:39:43 +0500
  • d6d798c0bd Update demo.py UbaidKhanAtGitHub 2023-08-31 22:43:02 +0500
  • 105eb8124c Update test.py UbaidKhanAtGitHub 2023-08-31 22:40:19 +0500
  • ba50ed3527 Update train.py UbaidKhanAtGitHub 2023-08-31 22:37:04 +0500
  • 304073c399 Update utils.py UbaidKhanAtGitHub 2023-08-31 22:31:07 +0500
  • 7d514727bc Merge 62c84d04b6 into e2117f2fb8 Yeom Junwoo 2023-08-18 11:28:52 +0700
  • 62c84d04b6 Char add yumjunstar 2023-08-18 02:33:32 +0000
  • a8ab4b3274 Fix utils.py for preventing KeyError at self.dict[char] and improving training quality yumjunstar 2023-08-14 14:46:58 +0900
  • 0997b9bde6 Merge 96fd12bb96 into e2117f2fb8 Abu Bakr Soliman 2023-07-28 14:33:17 +0900
  • bd93d6166b Merge 6e3216f79b into e2117f2fb8 Zobeir Raisi 2023-07-28 14:32:07 +0900
  • 39f14bb891 Merge 622ea1240b into e2117f2fb8 Young Jei 2023-07-23 20:24:02 +0400
  • f8225063fb Merge f54a6813ef into e2117f2fb8 Jake Loo 2023-07-19 15:50:13 +0800
  • 375e553770 Merge 0409350bab into e2117f2fb8 Gourav Chowdhary 2023-07-18 16:15:01 +0900
  • 8fa8b77cb3 Merge e4c624e521 into e2117f2fb8 Abu Bakr Soliman 2023-07-18 16:14:54 +0900
  • c5a871a955 Merge 734a4e7ee4 into e2117f2fb8 f4&(h2FU*0 2023-07-18 08:27:57 +0900
  • e2117f2fb8 Merge pull request #384 from boom1492/master master Geewook Kim 2023-07-16 15:04:32 +0900
  • c0fff493e9 Fixed transformers to always use mask for decoder Loh Zhun Yew 2023-04-12 17:26:28 +0800
  • ca1bc27de1 Added debug feature and enabled masking during validation for transformer decoder Loh Zhun Yew 2023-04-12 00:12:25 +0800
  • 9addc35638 Added support to downchannel the feature output if sequence modelling is not given Loh Zhun Yew 2023-04-11 17:48:55 +0800
  • 6aa123dd67 Fixed torch decoder not removing softmax Loh Zhun Yew 2023-04-10 18:39:54 +0800
  • 6672a7950f Added positional embedding choice Loh Zhun Yew 2023-04-09 23:39:15 +0800
  • c05d2bf53e made positional embedding more abstract Loh Zhun Yew 2023-04-07 13:40:11 +0800
  • d61b64a32f Cleaned up some print statements Loh Zhun Yew 2023-04-07 11:58:21 +0800
  • 8315cdd13a Added choice for custom transformer Loh Zhun Yew 2023-04-07 11:29:28 +0800
  • eeea63c746 Fixed loss function to use permute instead of view Loh Zhun Yew 2023-04-07 08:49:31 +0800
  • 38e092d310 Updated transformers backwards to be different for testing Loh Zhun Yew 2023-04-06 20:45:13 +0800
  • 31918ea108 Fixed transformers positional encoding Loh Zhun Yew 2023-04-06 18:07:13 +0800
  • 9e5b751a02 Fixed torch transformer decoder Loh Zhun Yew 2023-04-06 16:40:57 +0800
  • 6a9d6742ca Fixed nheads Loh Zhun Yew 2023-04-03 23:49:56 +0800
  • 36dad58c93 Testing PyTorch fully transformer layer Loh Zhun Yew 2023-04-03 22:06:54 +0800
  • 7c04b39410 Fixed cuda devices Loh Zhun Yew 2023-04-03 20:49:44 +0800
  • 60b7179918 Fixed masking to work with torch built-in transformers Loh Zhun Yew 2023-04-03 20:29:45 +0800
  • 49f82d755d Changed transformers decoder to use torch built in for testing Loh Zhun Yew 2023-04-03 18:43:50 +0800
  • db7084538c Fixed missing softmax from decoder layer Loh Zhun Yew 2023-04-03 17:59:24 +0800
  • ba5cf3bcc2 Fixed notebook for colab Loh Zhun Yew 2023-04-03 16:00:52 +0800
  • ffeeec87e5 Fixed data path for google colab Loh Zhun Yew 2023-04-03 15:43:47 +0800
  • 3c2fe2f60d Modified google colab code Loh Zhun Yew 2023-04-03 15:10:45 +0800
  • 64824061fc Fixed word embeddings to use n_chars instead of sequence length Loh Zhun Yew 2023-04-03 15:07:32 +0800
  • e346e9a475 Added post processing for transformers Loh Zhun Yew 2023-04-03 00:23:39 +0800
  • 7aa6e15dd9 modified sandbox notebook for colab Loh Zhun Yew 2023-04-02 18:35:03 +0800
  • d301c9952c Changed masking to use a really negative number instead of infinity Loh Zhun Yew 2023-04-02 18:31:32 +0800
  • 9bdfc5b947 Fixed masking to work with varying batch_size Loh Zhun Yew 2023-04-02 14:23:06 +0800
  • 9cbe968337 Added mask support for transformerdecoder Loh Zhun Yew 2023-04-02 13:59:19 +0800
  • d3839e1f47 Fixed some dataset iteration code and lowered google colab running time Loh Zhun Yew 2023-04-02 00:06:38 +0800
  • 54fc656d52 added sandbox notebook Loh Zhun Yew 2023-04-01 22:56:52 +0800
  • aecc7678f5 Added base transformer codes and added decoder into prediction. Added Sandbox file which also supports google colab and training files Loh Zhun Yew 2023-04-01 22:55:17 +0800
  • 5749d497ed Fix error next iterators Junghyun Lee 2023-02-28 16:07:02 +0900
  • c0a74d0afb Update train.py ssunggun2 2022-05-11 10:17:33 +0900
  • f28a471eab Update create_lmdb_dataset.py ssunggun2 2022-05-11 10:16:22 +0900
  • c10d4a23d2 ssunggun2 2022-05-10 18:33:15 +0900
  • e8cfa8ebae test ssunggun2 2022-05-10 17:18:26 +0900
  • 734a4e7ee4 Add batch splitting ratio check #W[_t 2022-04-08 18:28:56 +0800
  • 3a60090859 train v1 liyunfei 2022-03-31 13:47:50 +0800
  • 97614e178e add test code liyunfei 2022-03-31 13:47:39 +0800
  • a688516b49 add onnx liyunfei 2022-03-30 13:21:13 +0800
  • 6ebc7b982d fix all baseline training bugs. liyunfei 2022-03-30 10:12:39 +0800
  • c8b1b94e75 baseline train liyunfei 2022-03-29 14:07:43 +0800
  • 7b49380fc2 update .gitignore liyunfei 2022-03-28 20:15:59 +0800
  • 06302b9ef9 add my train liyunfei 2022-03-28 20:13:02 +0800
  • ef4271686d add my basic dataset liyunfei 2022-03-28 15:04:39 +0800
  • d9070562a5 add CRNN backbone liyunfei 2022-03-27 23:17:18 +0800
  • cecb3d43c3 feat : move train/trace scripts to ocr-notebooks raki dedigama 2022-03-11 16:11:12 +0200
  • e144c0b4ec chore : update README raki dedigama 2022-03-05 15:39:01 +0200
  • db487e43f6 feat : move trba test & train scripts outside package, update readme raki dedigama 2022-03-05 15:11:17 +0200
  • da770f1630 feat : update triton flags definition raki dedigama 2022-03-03 16:53:46 +0200
  • e04f0e660b chore : clean-up and document functions raki dedigama 2022-03-03 14:10:23 +0200
  • 6ff075f177 feat : add script for torchscript tracing raki dedigama 2022-02-28 08:54:32 +0200
  • 0a2e2d6509 fix : update train scripts imports raki dedigama 2022-02-27 16:25:44 +0200
  • 474d3e4163 chore : refactor trba package with triton/core modules raki dedigama 2022-02-27 16:17:06 +0200
  • 6ea4170bee Merge branch 'dev-triton' of github.com:SolteqRobotics/deep-text-recognition-benchmark into dev-triton raki dedigama 2022-02-27 13:32:53 +0200
  • 954ed73027 chore : remove training/notebook packages from trba requirements raki dedigama 2022-02-27 13:32:46 +0200
  • f30d9a1362 chore : remove TrbaOcr parameter from TRBATritonDetector initialization raki dedigama 2022-02-27 13:31:36 +0200
  • 8fd6654187 fix : add requirements file for trba / update gitignore raki dedigama 2022-02-26 16:45:32 +0200
  • fe2cbd5130 feat : update trba package with src and test scripts raki dedigama 2022-02-26 16:43:01 +0200
  • 2a90fde947 feat : change package name to trba raki dedigama 2022-02-26 15:28:40 +0200
  • cb1de204ba feat : add post-processing for triton-detection raki dedigama 2022-02-25 14:19:13 +0200
  • ead4059336 feat : separate classes for local detection / triton detection raki dedigama 2022-02-25 13:39:51 +0200
  • 9c9848ed29 feat : refactor trba_detector / triton_client raki dedigama 2022-02-24 14:27:43 +0200
  • 11620c0af7 feat : configure triton client for TRBA raki dedigama 2022-02-24 13:28:54 +0200
  • e4c624e521 support training progress bar Abu Bakr 2021-12-17 06:40:07 +0200
  • 96fd12bb96 set new parameter to save the model to a specific path Abu Bakr 2021-12-17 06:33:55 +0200
  • 0600814210 feat : read config from configparser class raki dedigama 2021-11-16 15:46:14 +0200
  • cdc575dad7 chore : remove redundant arguments for lmdb dataset preparation raki dedigama 2021-11-15 12:20:53 +0200
  • f4110f6ae0 feat : generate gt.txt file before creating lmdb datasets raki dedigama 2021-11-12 12:54:46 +0200
  • e40d861674 feat : add notebook for lmdb dataset conversion raki dedigama 2021-11-11 15:09:58 +0200
  • 39b8731e6d feat : add train_trba.py script for training TRBA model explicitly Raki Dedigama 2021-11-11 14:02:31 +0200
  • 05d76f9f1e feat : move train / test & demo scripts outside dptr package Raki Dedigama 2021-11-11 13:55:04 +0200
  • 5fe4b3dbdb chore : remove redundant print statements raki dedigama 2021-10-07 13:50:32 +0300
  • 319dac035f feat : convert tensor to cpu before converting to numpy raki dedigama 2021-10-06 15:00:58 +0300
  • 6b5946e092 feat : remove exception handling raki dedigama 2021-10-06 13:34:43 +0300
  • 06fe9c3510 feat : Exception handling for model loading raki dedigama 2021-10-06 12:37:16 +0300
  • dfe91a24df feat : move model loading to init raki dedigama 2021-10-05 16:59:58 +0300
  • 882e540903 chore : display CUDA device at Prediction raki dedigama 2021-10-05 15:59:54 +0300
  • 75c0638553 feat : specify torch device for TrbaOCR raki dedigama 2021-10-05 15:17:26 +0300
  • 25fe6d2f8d feat : predict to return ean and confidence score raki dedigama 2021-10-05 12:35:36 +0300
  • 55b65b44bd chore : add predict() func to TrbaOCR raki dedigama 2021-10-05 11:03:33 +0300