Japanese BERT (BertJapanese)

Overview

The BERT models trained on Japanese text.

There are models with two different tokenization methods:

  • Tokenize with MeCab and WordPiece. This requires an extra dependency, fugashi, which is a wrapper around MeCab.
  • Tokenize into characters.

To use MecabTokenizer, you should run pip install transformers["ja"] (or pip install -e .["ja"] if you install from source) to install the dependencies.

See the cl-tohoku repository for details.

Example of using a model with MeCab and WordPiece tokenization:

>>> import torch
>>> from transformers import AutoModel, AutoTokenizer

>>> bertjapanese = AutoModel.from_pretrained("cl-tohoku/bert-base-japanese")
>>> tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese")

>>> ## Input Japanese Text
>>> line = "吾輩は猫である。"

>>> inputs = tokenizer(line, return_tensors="pt")

>>> print(tokenizer.decode(inputs["input_ids"][0]))
[CLS] 吾輩 は 猫 で ある 。 [SEP]

>>> outputs = bertjapanese(**inputs)

Example of using a model with character tokenization:

>>> bertjapanese = AutoModel.from_pretrained("cl-tohoku/bert-base-japanese-char")
>>> tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese-char")

>>> ## Input Japanese Text
>>> line = "吾輩は猫である。"

>>> inputs = tokenizer(line, return_tensors="pt")

>>> print(tokenizer.decode(inputs["input_ids"][0]))
[CLS] 吾 輩 は 猫 で あ る 。 [SEP]

>>> outputs = bertjapanese(**inputs)

This implementation is the same as BERT, except for the tokenization method. Refer to the BERT documentation for API reference information. This model was contributed by cl-tohoku.
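
Since only the tokenizer differs from BERT, the checkpoints also work with the standard task heads and pipelines. A minimal fill-mask sketch (the masked sentence is illustrative, and predictions are omitted since they depend on the checkpoint):

>>> from transformers import pipeline

>>> ## The mask token for these checkpoints is [MASK], as in the original BERT
>>> fill_mask = pipeline("fill-mask", model="cl-tohoku/bert-base-japanese")
>>> predictions = fill_mask("吾輩は[MASK]である。")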

BertJapaneseTokenizer

class transformers.BertJapaneseTokenizer


( vocab_file spm_file = None do_lower_case = False do_word_tokenize = True do_subword_tokenize = True word_tokenizer_type = 'basic' subword_tokenizer_type = 'wordpiece' never_split = None unk_token = '[UNK]' sep_token = '[SEP]' pad_token = '[PAD]' cls_token = '[CLS]' mask_token = '[MASK]' mecab_kwargs = None sudachi_kwargs = None jumanpp_kwargs = None **kwargs )

Parameters

  • vocab_file (str) — Path to a one-wordpiece-per-line vocabulary file.
  • spm_file (str, optional) — Path to SentencePiece file (generally has a .spm or .model extension) that contains the vocabulary.
  • do_lower_case (bool, optional, defaults to False) — Whether to lower case the input. Only has an effect when do_basic_tokenize=True.
  • do_word_tokenize (bool, optional, defaults to True) — Whether to do word tokenization.
  • do_subword_tokenize (bool, optional, defaults to True) — Whether to do subword tokenization.
  • word_tokenizer_type (str, optional, defaults to "basic") — Type of word tokenizer. Choose from ["basic", "mecab", "sudachi", "jumanpp"].
  • subword_tokenizer_type (str, optional, defaults to "wordpiece") — Type of subword tokenizer. Choose from ["wordpiece", "character", "sentencepiece"].
  • mecab_kwargs (dict, optional) — Dictionary passed to the MecabTokenizer constructor.
  • sudachi_kwargs (dict, optional) — Dictionary passed to the SudachiTokenizer constructor.
  • jumanpp_kwargs (dict, optional) — Dictionary passed to the JumanppTokenizer constructor.

Construct a BERT tokenizer for Japanese text.

This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
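
For example, a minimal sketch of loading this tokenizer directly for the two checkpoints shown in the overview:

>>> from transformers import BertJapaneseTokenizer

>>> ## MeCab word tokenization followed by WordPiece subword tokenization
>>> tokenizer = BertJapaneseTokenizer.from_pretrained("cl-tohoku/bert-base-japanese")
>>> tokenizer.tokenize("吾輩は猫である。")
['吾輩', 'は', '猫', 'で', 'ある', '。']

>>> ## Character-level tokenization
>>> char_tokenizer = BertJapaneseTokenizer.from_pretrained("cl-tohoku/bert-base-japanese-char")
>>> char_tokenizer.tokenize("吾輩は猫である。")
['吾', '輩', 'は', '猫', 'で', 'あ', 'る', '。']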

build_inputs_with_special_tokens


( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) List[int]

Parameters

  • token_ids_0 (List[int]) — List of IDs to which the special tokens will be added.
  • token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.

Returns

List[int]

List of input IDs with the appropriate special tokens.

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A BERT sequence has the following format:

  • single sequence: [CLS] X [SEP]
  • pair of sequences: [CLS] A [SEP] B [SEP]
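
A minimal sketch, reusing the example sentence from the overview (variable names are illustrative):

>>> from transformers import BertJapaneseTokenizer

>>> tokenizer = BertJapaneseTokenizer.from_pretrained("cl-tohoku/bert-base-japanese")
>>> ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("吾輩は猫である。"))
>>> ## Adds [CLS] at the start and [SEP] at the end of a single sequence
>>> tokenizer.decode(tokenizer.build_inputs_with_special_tokens(ids))
'[CLS] 吾輩 は 猫 で ある 。 [SEP]'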

convert_tokens_to_string


( tokens )

Converts a sequence of tokens (strings) into a single string.
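
For instance, with the WordPiece-based checkpoint from the overview (the echoed output assumes the default whitespace joining):

>>> from transformers import BertJapaneseTokenizer

>>> tokenizer = BertJapaneseTokenizer.from_pretrained("cl-tohoku/bert-base-japanese")
>>> tokenizer.convert_tokens_to_string(["吾輩", "は", "猫", "で", "ある", "。"])
'吾輩 は 猫 で ある 。'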

create_token_type_ids_from_sequences


( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) List[int]

Parameters

  • token_ids_0 (List[int]) — List of IDs.
  • token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.

Returns

List[int]

List of token type IDs according to the given sequence(s).

Create a mask from the two sequences passed to be used in a sequence-pair classification task. A BERT sequence pair mask has the following format:

0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence    | second sequence |

If token_ids_1 is None, this method only returns the first portion of the mask (0s).
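
A minimal sketch; the second sentence is an illustrative continuation of the example text:

>>> from transformers import BertJapaneseTokenizer

>>> tokenizer = BertJapaneseTokenizer.from_pretrained("cl-tohoku/bert-base-japanese")
>>> ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("吾輩は猫である。"))
>>> ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("名前はまだ無い。"))
>>> ## 0s cover [CLS] + first sequence + [SEP]; 1s cover second sequence + final [SEP]
>>> token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)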

get_special_tokens_mask


( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None already_has_special_tokens: bool = False ) List[int]

Parameters

  • token_ids_0 (List[int]) — List of IDs.
  • token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
  • already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model.

Returns

List[int]

A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.

Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method.
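
A minimal sketch with the six-token example sentence from the overview; the two 1s mark where [CLS] and [SEP] would be added:

>>> from transformers import BertJapaneseTokenizer

>>> tokenizer = BertJapaneseTokenizer.from_pretrained("cl-tohoku/bert-base-japanese")
>>> ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("吾輩は猫である。"))
>>> tokenizer.get_special_tokens_mask(ids)
[1, 0, 0, 0, 0, 0, 0, 1]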
