# BodyNet: Volumetric Inference of 3D Human Body Shapes

Gül Varol, Duygu Ceylan, Bryan Russell, Jimei Yang, Ersin Yumer, Ivan Laptev and Cordelia Schmid,
*BodyNet: Volumetric Inference of 3D Human Body Shapes*, ECCV 2018.

[Project page] [arXiv]

## Contents
1. Preparation
2. Training
3. Testing
4. Fitting SMPL model
* Citation
* Acknowledgements

## 1. Preparation

### 1.1. Requirements

**Datasets**
* Download SURREAL and/or Unite the People (UP) dataset(s).

**Training**
* Install Torch with cuDNN support.
* Install matio: `luarocks install matio`
* Install OpenCV-Torch: `luarocks install cv`
* Tested on Linux with CUDA v8 and cuDNN v5.1.

**Pre-processing and fitting python scripts**
* Python 2 environment with the following installed:
  * OpenDr
  * chumpy
  * OpenCV

**SMPL related**
* Download SMPL for Python and set `SMPL_PATH`.
* Fix the naming: `mv basicmodel_m_lbs_10_207_0_v1.0.0 basicModel_m_lbs_10_207_0_v1.0.0`
* Make the following changes in `smpl_webuser/verts.py`:

```diff
- v_template, J, weights, kintree_table, bs_style, f,
+ v_template, J_regressor, weights, kintree_table, bs_style, f,

- if sp.issparse(J):
-     regressor = J
-     J_tmpx = MatVecMult(regressor, v_shaped[:,0])
-     J_tmpy = MatVecMult(regressor, v_shaped[:,1])
-     J_tmpz = MatVecMult(regressor, v_shaped[:,2])
+ if sp.issparse(J_regressor):
+     J_tmpx = MatVecMult(J_regressor, v_shaped[:,0])
+     J_tmpy = MatVecMult(J_regressor, v_shaped[:,1])
+     J_tmpz = MatVecMult(J_regressor, v_shaped[:,2])

- assert(ischumpy(J))
+ assert(ischumpy(J_regressor))

+ result.J_regressor = J_regressor
```

* Download the neutral SMPL model and place it under the `models` folder of SMPL.
* Download SMPLify and set `SMPLIFY_PATH`.

**Voxelization related**
* Download the binvox executable and set `BINVOX_PATH`.
* Download the binvox Python package and set `BINVOX_PYTHON_PATH`.

### 1.2. Pre-processing for training

**SURREAL voxelization**

Loop over the dataset and run `preprocess_surreal_voxelize.py` for each `_info.mat` file by passing it with the `--input` option (for foreground and/or part voxels, add the `--parts` option).
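The loop above can be scripted, for example as follows. This is a dry-run sketch, not part of the repo: the dataset layout under `~/datasets/SURREAL` is an assumption, and only the `--input`/`--parts` flags mentioned above are used.

```python
import glob
import os

# Hypothetical SURREAL root; adjust to your actual ~/datasets/SURREAL layout.
dataset_root = os.path.expanduser("~/datasets/SURREAL/data/cmu/train")

def build_commands(info_files, with_parts=False):
    """Build one preprocess_surreal_voxelize.py invocation per _info.mat file."""
    cmds = []
    for path in info_files:
        cmd = ["python", "preprocess_surreal_voxelize.py", "--input", path]
        if with_parts:
            cmd.append("--parts")  # also produce per-part voxels
        cmds.append(cmd)
    return cmds

info_files = sorted(glob.glob(os.path.join(dataset_root, "**", "*_info.mat"),
                              recursive=True))
for cmd in build_commands(info_files, with_parts=True):
    print(" ".join(cmd))  # dry run; pass cmd to subprocess.run() to execute
```

Printing the commands first makes it easy to inspect the sweep before launching what can be a long-running job.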
The surface voxels are filled with `imfill` in the `preprocess_surreal_fillvoxels.m` script, but you could do it in Python (e.g. `ndimage.binary_fill_holes(binvoxModel.data)`). Sample preprocessed data is included in `preprocessing/sample_data/surreal`.

**Preparing UP data**

Loop over the dataset by running `preprocess_up_voxelize.py` to voxelize and re-organize the dataset. Fill the voxels with `preprocess_up_fillvoxels.m`. Preprocess the segmentation maps with `preprocess_up_segm.m`. Sample preprocessed data is included in `preprocessing/sample_data/up`.

### 1.3. Setup paths for training

Place the data under `~/datasets/SURREAL` and `~/datasets/UP`, or change `opt.dataRoot` in `opts.lua`. The outputs will be written to `~/cnn_saves//`; change `opt.logRoot` to use another cnn_saves location.

### 1.4. Download pre-trained models

We provide several pre-trained models used in the paper, `bodynet.tar.gz` (980MB). The contents are explained in the training section. Extract the `.t7` files and place them under the `models/t7` directory.

```
# Trained on SURREAL
model_segm_cmu.t7
model_joints3D_cmu.t7
model_voxels_cmu.t7
model_voxels_FVSV_cmu.t7
model_partvoxels_FVSV_cmu.t7
model_bodynet_cmu.t7
# Trained on UP
model_segm_UP.t7
model_joints3D_UP.t7
model_voxels_FVSV_UP.t7
model_voxels_FVSV_UP_manualsegm.t7
model_bodynet_UP.t7
# Trained on MPII
model_joints2D.t7
```

## 2. Training

There are sample scripts under the `training/exp/backup` directory. These were created automatically using the `training/exp/run.sh` script.
For example, the following `run.sh` script:

```bash
source create_exp.sh -h

input="rgb"
supervision="segm15joints2Djoints3Dvoxels"
inputtype="gt"
extra_args="_FVSV"
running_mode="train"
#modelno=1
dataset="cmu"

create_cmd
cmd="${return_str} \\
-batchSize 4 \\
-modelVoxels models/t7/model_voxels_FVSV_cmu.t7 \\
-proj silhFVSV \"
run_cmd
```

generates and runs the following:

```bash
cd ..
qlua main.lua \
-dirName segm15joints2Djoints3Dvoxels/rgb/gt_FVSV \
-input rgb \
-supervision segm15joints2Djoints3Dvoxels \
-datasetname cmu \
-batchSize 4 \
-modelVoxels models/t7/model_voxels_FVSV_cmu.t7 \
-proj silhFVSV \
```

This trains the final version of the model described in the paper, i.e., the end-to-end network with pre-trained subnetworks, multi-task losses, and multi-view re-projection losses. If you manage to run this on the SURREAL dataset, the standard output should resemble the following:

```
Epoch: [1][1/2000] Time: 66.197, Err: 0.170 PCK: 87.50, PixelAcc: 68.36, IOU: 55.03, RMSE: 0.00, PE3Dvol: 33.39, IOUvox: 66.56, IOUprojFV: 92.89, IOUprojSV: 75.56, IOUpartvox: 0.00, LR: 1e-03, DataLoadingTime 192.286
Epoch: [1][2/2000] Time: 1.240, Err: 0.472 PCK: 87.50, PixelAcc: 21.38, IOU: 18.79, RMSE: 0.00, PE3Dvol: 44.63, IOUvox: 44.89, IOUprojFV: 73.05, IOUprojSV: 65.19, IOUpartvox: 0.00, LR: 1e-03, DataLoadingTime 0.237
Epoch: [1][3/2000] Time: 1.040, Err: 0.318 PCK: 65.00, PixelAcc: 49.58, IOU: 35.99, RMSE: 0.00, PE3Dvol: 52.92, IOUvox: 57.04, IOUprojFV: 86.97, IOUprojSV: 66.29, IOUpartvox: 0.00, LR: 1e-03, DataLoadingTime 0.570
Epoch: [1][4/2000] Time: 1.678, Err: 0.771 PCK: 50.00, PixelAcc: 42.95, IOU: 36.04, RMSE: 0.00, PE3Dvol: 99.04, IOUvox: 52.74, IOUprojFV: 83.87, IOUprojSV: 64.07, IOUpartvox: 0.00, LR: 1e-03, DataLoadingTime 0.101
```

2D pose (PCK), 2D body part segmentation (PixelAcc, IOU), depth (RMSE), 3D pose (PE3Dvol), voxel
prediction (IOUvox), and front-view and side-view re-projection (IOUprojFV, IOUprojSV) performances are reported at each iteration.

The final network is the result of multi-stage training:

* SubNet1 (`model_segm_cmu.t7`): RGB -> Segm
  * obtained from here; the first two stacks are extracted
* SubNet2 (`model_joints2D.t7`): RGB -> Joints2D
  * trained on MPII with 8 stacks; the first two stacks are extracted
* SubNet3 (`model_joints3D_cmu.t7`): RGB + Segm + Joints2D -> Joints3D
  * trained from scratch with 2 stacks using predicted segmentation (SubNet1) and 2D pose (SubNet2)
* SubNet4 (`model_voxels_cmu.t7`): RGB + Segm + Joints2D + Joints3D -> Voxels
  * trained from scratch with 2 stacks using predicted segmentation (SubNet1), 2D pose (SubNet2), and 3D pose (SubNet3)
* SubNet5 (`model_voxels_FVSV_cmu.t7`): RGB + Segm + Joints2D + Joints3D -> Voxels + FV + SV
  * pre-trained from SubNet4 with additional re-projection losses
* BodyNet (`model_bodynet_cmu.t7`): RGB -> Segm + Joints2D + Joints3D + Voxels + FV + SV
  * a combination of SubNet1 through SubNet5, fine-tuned end-to-end with multi-task losses

Note that performance with 8 stacks is generally better, but we preferred to reduce complexity at the cost of a little performance.

The recipe above is used for the SURREAL dataset. For the UP dataset, we first fine-tuned SubNet1, giving `model_segm_UP.t7` (SubNet1_UP). Then we fine-tuned SubNet3, giving `model_joints3D_UP.t7` (SubNet3_UP), using SubNet1_UP and SubNet2. Finally, we fine-tuned SubNet5, giving `model_voxels_FVSV_UP.t7` (SubNet5_UP), using SubNet1_UP, SubNet2, and SubNet3_UP. All of these are fine-tuned end-to-end to obtain `model_bodynet_UP.t7`. The model used in the paper for the experiments with manual segmentations is also provided (`model_voxels_FVSV_UP_manualsegm.t7`).

**Part Voxels**

We use the script `models/init_partvoxels.lua` to copy the last-layer weights 7 times (6 body parts + 1 background) to initialize the part voxels model (`models/t7/init_partvoxels.t7`).
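The replication idea can be sketched as follows. This is a minimal Python illustration of the initialization strategy, not the actual Lua code, and the tiny filter shape is a made-up example:

```python
import copy

def replicate_output_weights(weights, bias, n_copies=7):
    """Initialize a multi-class output layer by tiling a single-class layer's
    parameters n_copies times (here: 6 body parts + 1 background), mirroring
    in spirit what models/init_partvoxels.lua does to the last layer."""
    new_weights = [copy.deepcopy(weights) for _ in range(n_copies)]
    new_bias = [bias] * n_copies
    return new_weights, new_bias

# Hypothetical single-output-channel filter (in_channels x k x k).
w = [[[0.1, 0.2], [0.3, 0.4]]]
W, b = replicate_output_weights(w, 0.0, n_copies=7)
print(len(W))  # 7 output channels, each starting as a copy of w
```

Starting every part channel from the same foreground-voxel weights gives each part a sensible initial prediction before fine-tuning specializes the channels.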
After training this model without re-projection losses, we fine-tune it with the re-projection loss. `model_partvoxels_cmu.t7` is the best model obtained. With end-to-end fine-tuning we had divergence problems and did not put much effort into making it work. Note that this model is preliminary and needs improvement.

**Misc**

A few functionalities of the code are not used in the paper but are still provided. These include training the 3D pose and voxels networks using ground-truth (GT) segmentation/2D pose/3D pose inputs, as well as mixing predicted and GT inputs in each batch. This is achieved by setting the `mix` option to true. The results of using only predicted inputs are often comparable to using a mix, so we always used only predictions. Predictions are passed as input using the `applyHG` option, which is not very efficient.

## 3. Testing

Use the demo script to apply the provided models on sample images. You can also use the `demo/demo.m` Matlab script to produce visualizations.

## 4. Fitting SMPL model

Fitting scripts for the SURREAL (`fitting/fit_surreal.py`) and UP (`fitting/fit_up.py`) datasets are provided with sample experiment outputs. The scripts use the optimization functions from `tools/smpl_utils.py`.

## Citation

If you use this code, please cite the BodyNet paper (ECCV 2018).
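As an aside, the per-iteration log lines shown in Section 2 are easy to parse if you want to plot training curves. A minimal sketch follows; the regex and field handling are assumptions based only on the sample output above, not utilities shipped with the repo:

```python
import re

def parse_log_line(line):
    """Extract epoch/iteration indices and named metrics from a training log
    line of the form shown in Section 2 (assumed format)."""
    head = re.match(r"Epoch: \[(\d+)\]\[(\d+)/(\d+)\]", line)
    pairs = re.findall(r"([A-Za-z0-9]+): ([-\d.e]+)", line)
    return {
        "epoch": int(head.group(1)),
        "iter": int(head.group(2)),
        "metrics": {name: float(value) for name, value in pairs},
    }

sample = ("Epoch: [1][2/2000] Time: 1.240, Err: 0.472 PCK: 87.50, "
          "PixelAcc: 21.38, IOU: 18.79, RMSE: 0.00, PE3Dvol: 44.63, "
          "IOUvox: 44.89, IOUprojFV: 73.05, IOUprojSV: 65.19, "
          "IOUpartvox: 0.00, LR: 1e-03, DataLoadingTime 0.237")
parsed = parse_log_line(sample)
print(parsed["metrics"]["PCK"])  # -> 87.5
```

Note that `DataLoadingTime` has no colon in the sample output, so this sketch deliberately skips it; extend the pattern if you need that field.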