DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-9 and 11-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea(s) without significantly more.
Regarding Claim 1, analyzed as the representative claim:
[Step 1] Claim 1 recites “A data processing method…” which falls within the “process” statutory category of invention under 35 U.S.C. § 101.
[Step 2A – Prong 1] Claim 1 recites “A data processing method, performed by at least one processor of a computer device and comprising: obtaining sign language action data; determining a sign language tagging sequence corresponding to the sign language action data by performing element analysis on the sign language action data, the element analysis being based on a pre-established sign language tagging system, the sign language tagging sequence comprising tagging information of basic sign language elements corresponding to the sign language action data; and performing operation processing on the sign language action data based on the sign language tagging sequence.” These limitations, under their broadest reasonable interpretation, encompass mental processes (including observation, evaluation, judgment, and opinion). That is, other than reciting that the method is performed by “at least one processor of a computer device,” nothing in the claim precludes the steps from practically being performed by a human, in the human mind, and/or with pen and paper. Specifically, the claim encompasses a human knowledgeable of sign language observing a sign language performance, noticing the elements of the performance (such as hand shape), and translating the sign language into a different natural language. Accordingly, the claim recites an abstract idea(s).
[Step 2A – Prong 2] The judicial exception is not integrated into a practical application. Specifically, the claim recites the additional element of a processor in a computing device performing the method steps, wherein the processor and computing device are recited at a high level of generality and merely automate the obtaining, determining, and performing steps. Therefore, this additional element amounts to no more than mere instructions to apply the exception using a generic computing device, which does not impose any meaningful limits on practicing the abstract idea(s). Thus, the claim is directed to an abstract idea(s).
[Step 2B] The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea(s) into a practical application, the additional element of the processor in a computing device performing the method steps amounts to no more than mere instructions to apply the exception using a generic computing device, which cannot provide an inventive concept. Accordingly, representative claim 1 is not patent eligible.
Independent claim 11 recites limitations substantially similar to those of representative claim 1 and is rejected under the same rationale. Claims 2-9 and 12-19 depend on claims 1 and 11, respectively, and include all of the limitations of those claims. Therefore, the dependent claims recite the same abstract idea(s) as those recited in the independent claims or contain limitations drawn to generic computer components and/or reciting extra-solution activities. While the dependent claims may have a narrower scope than the representative claim, no claim contains an additional element that integrates the abstract idea(s) into a practical application or provides an inventive concept that transforms the corresponding claim into a patent-eligible application of the otherwise ineligible abstract idea(s). Therefore, claims 2-9 and 11-19 are also patent ineligible.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 5-6 and 15-16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 5 recites the limitation "the performing of the disassembly and of the classification" in lines 1-2 and the limitation “the database” in lines 3 and 7. There is insufficient antecedent basis for these limitations in the claim. The Examiner notes that these limitations are recited in claim 4, so amending claim 5 to depend on claim 4 instead of claim 2 would overcome this rejection.
Similarly, Claim 15 recites the limitation “the disassembly and the classification” in line 2 and the limitation “the database” in lines 3 and 7. There is insufficient antecedent basis for these limitations in the claim. The Examiner notes that these limitations are recited in claim 14, so amending claim 15 to depend on claim 14 instead of claim 12 would overcome this rejection.
Claims 6 and 16 are rejected for being dependent on claims 5 and 15, respectively.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3, 6-7, 9, 11-13, 16-17, and 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by JP H08115408 (hereinafter “Sagawa”).
Regarding independent Claims 1 and 11, Sagawa discloses obtaining sign language action data (fig. 1; par. 0007: “a sign language input unit 101;” fig. 2: hand movement input device 201);
determining a sign language tagging sequence corresponding to the sign language action data by performing element analysis on the sign language action data, the element analysis being based on a pre-established sign language tagging system, the sign language tagging sequence comprising tagging information of basic sign language elements corresponding to the sign language action data (par. 0005: “the sign language recognition device of the present invention breaks down the actions in sign language words into action elements such as hand shape, direction, position, movement, and relationship, and expresses sign language words as a combination of symbols representing these action elements”); and
performing operation processing on the sign language action data based on the sign language tagging sequence (par. 0006: “sign language words are expressed as combinations of action elements, and sign language words are recognized by evaluating simultaneous and sequential combinations of the recognition results of action elements that have been independently recognized… since recognition is performed in smaller units (action elements), processing is easier, and even when recognizing words, it is only necessary to integrate information regarding the presence or absence of action elements, making processing easier and enabling high-speed processing. This allows the recognition of sign language words by focusing on invariant components, even for sign language words that contain actions that change depending on the context or situation in which they are expressed. Furthermore, each action element can be recognized using a method suited to the properties of that action element, so sign language recognition can be performed flexibly, efficiently, and with high accuracy”).
Further regarding independent Claim 11, Sagawa also discloses a memory (fig. 2: memories 204-208) and one or more processors (fig. 2: device capable of running the described program inherently must have a processor), the memory storing computer-readable instructions, the computer-readable instructions, when executed by the one or more processors (par. 0008: “a device that recognizes motion elements and sign language words, and reads programs from memories 204 and 205 and performs recognition processing in accordance with the programs”), causing the one or more processors to perform [the above] operations.
Regarding Claims 2 and 12, Sagawa further discloses prior to the performing of the element analysis: establishing a sign language tagging system based on basic sign language elements and element types corresponding to each basic sign language element, the sign language tagging system comprising tagging information corresponding to the element types of each basic sign language element (par. 0005: “the sign language recognition device of the present invention breaks down the actions in sign language words into action elements such as hand shape, direction, position, movement, and relationship, and expresses sign language words as a combination of symbols representing these action elements… the motion elements (sequential elements and simultaneous elements) of the required motion are recognized independently and stored in the storage means;” par. 0008: “In FIG. 4, an action element name 401 indicates the name of the action element whose parameters are used in the recognition process, a parameter count 402 indicates the number of parameters used to recognize the action element, and 403 to 405 indicate each parameter”).
Regarding Claims 3 and 13, Sagawa further discloses the basic sign language elements comprise at least one of a left or right arm feature, a one or both handshape feature, an orientation motion feature, a knuckle bending angle, a facial expression feature, or constraint information (par. 0012: “various types of motion elements to be recognized, such as the shape, direction, position, and movement of the hand;” par. 0008: “hand movement input device 201 converts the hand movement of sign language into multidimensional time series data including the bending angle of the fingers, the position of the hand, and the like;” par. 0019: “an image input device 2102 may be used to recognize facial expressions, mouth movements, facial movements, and the like”).
Regarding Claims 6 and 16, Sagawa further discloses the action feature comprises at least one of rotation data, displacement data, a bending angle, a key feature, or an expression feature (par. 0008: “hand movement input device 201 converts the hand movement of sign language into multidimensional time series data including the bending angle of the fingers, the position of the hand, and the like;” par. 0009: “the hand position is further comprised of x-axis data 302, y-axis data 303, and z-axis data 304;” par. 0019: “an image input device 2102 may be used to recognize facial expressions, mouth movements, facial movements, and the like”).
Regarding Claims 7 and 17, Sagawa further discloses the determining of the sign language tagging sequence comprises:
a first element type and a first timestamp of the first basic sign language element (par. 0015: “an action element has been detected, and in step 705, information about the detected action element, consisting of the start time, end time, and evaluation value, is stored in the action element recognition result memory 208 in FIG. 2”);
determining, based on the pre-established sign language tagging system, first tagging information of the first basic sign language element and second tagging information of the first element type (par. 0009: “the hand position, direction, and finger bending at times t1, t2, and tn, respectively. In this way, actions in sign language are expressed as time-series data consisting of hand position 301, hand direction 305, and finger bending 309”); and
determining, based on the first timestamp, the first tagging information, and the second tagging information, the sign language tagging sequence corresponding to the sign language action data (par. 0005: “breaks down the actions in sign language words into action elements such as hand shape, direction, position, movement, and relationship, and expresses sign language words as a combination of symbols representing these action elements. These combinations include operation elements connected sequentially (in time series) and operation elements connected simultaneously (in parallel). In the recognition process, first, the motion elements (sequential elements and simultaneous elements) of the required motion are recognized independently and stored in the storage means. Next, the recognition results stored in the storage means are searched for the action elements necessary for the sign language word to be recognized, and the sign language word is recognized by examining simultaneous and sequential combinations of the components of the searched action. Furthermore, the components of each action are recognized using the most suitable recognition method for each component”).
Regarding Claims 9 and 19, Sagawa further discloses the performing of the operation processing comprises: performing sign language translation processing on the sign language action data based on the sign language tagging sequence, to obtain a target text sequence corresponding to the sign language action data (fig. 6; par. 0005: “the sign language recognition device of the present invention breaks down the actions in sign language words into action elements;” par. 0008: “The output device 203 is a device that outputs the recognition results of [input] sign language words, and can use text output”).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 4-5 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Sagawa as applied to claims 1 and 11, respectively, above, and further in view of CN 101794528 (hereinafter “Shi”).
Regarding Claims 4 and 14, Sagawa discloses a database of sign language action data (par. 0007: “action elements stored in the sign language word dictionary 112”) but does not explicitly disclose the previous disassembly and classification performed to obtain that data. However, Shi discloses the establishing of the sign language tagging system comprises: performing disassembly and classification on sign language action data in a database, to obtain the basic sign language elements and the element types corresponding to each basic sign language element (fig. 3; par. 0007: “a training completion sign language action feature classifier for training language motion division sign language action feature database;” pars. 0093-0094: “sign language action classifier can be trained using the support vector machine method to finish the training of the language database… analysis system 2 outputs a sign language action feature information for classifying and identifying”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the sign language translation device of Sagawa with the disassembly and classification of sign language action data of Shi in order to increase the action identification rate (Shi, par. 0015). Moreover, Sagawa already discloses this database, so some method of obtaining its elements is necessarily implied (Sagawa, par. 0007).
Regarding Claims 5 and 15, Examiner is interpreting that “the performing of the disassembly and of the classification” refers to the same disassembly and classification recited in claims 4 and 14, respectively (see rejection of claims 5 and 15 under 35 U.S.C. § 112(b) above). With that understanding, Sagawa modified by Shi discloses the performing of the disassembly and of the classification comprises: traversing the sign language action data in the database, performing action disassembly on the sign language action data, and determining a key part corresponding to each piece of the sign language action data and an action feature of the key part (Shi, fig. 3: training the classifier involves iterating through a sign language action collection and performing feature extraction; Examiner interprets the feature extraction as implicitly extracting the key part of the given sign language action);
performing classification processing on the key part corresponding to each piece of the sign language action data in the database and the action feature of the key part, to obtain at least two class clusters (Shi, par. 0007: “sign language action feature classifier for training language motion division sign language action feature database;” par. 0110: “the classifier identification process in the DSP unit of the portable system can in real time to identify the class to the feature signal information”), each class cluster corresponding to one basic sign language element (Sagawa, par. 0008: resulting obtained data is “represents combinations of action elements necessary for recognizing each sign language word”); and
determining, based on an action feature of each class cluster, element types of a basic sign language element corresponding to the class cluster (Sagawa, par. 0010: “FIG. 5 is a diagram showing the format of the sign language word dictionary stored in the sign language word dictionary memory of the present invention. In FIG. 5, a sign language word name 501 indicates the name of a sign language word represented by the combination of the action elements. The action type 502;” Examiner notes that because the resulting obtained data includes type 502, there inherently must have been a step of determining that element type).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the sign language translation device of Sagawa with the disassembly and classification of sign language action data of Shi under the same rationale provided for claims 4 and 14 above. Additionally, and to reiterate, Sagawa already discloses a database from which sign language elements and types have been extracted and classified (Sagawa, par. 0007), and Shi merely discloses such a method of classifying sign language action data from a database (Shi, par. 0007).
Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Sagawa as applied to claims 1 and 11, respectively, above, and further in view of US 2021/0160580 (hereinafter “Janugani”).
Regarding Claims 8 and 18, Sagawa does not disclose a character model performing the sign language action. However, Janugani discloses the performing of the operation processing comprises: driving, based on the sign language tagging sequence, a pre-established three-dimensional character model to perform a sign language action corresponding to the sign language action data (par. 0034: “an animation processor is included and is configured to generate animated sign language interpretation videos… the animation process may be configured to animate a character to perform the signing”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the sign language translation device of Sagawa with the character model of Janugani in order to generate a sign language video translation of text or to otherwise demonstrate the sign language being translated (Janugani, par. 0034).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
US 2022/0391612 (Chakrabarty) teaches a device and method for translating sign language to text using a variety of sign language elements.
US 2018/0301061 (Paudyal) teaches a system and method for translating sign language to text using a variety of sign language elements.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JULIE DOSHER whose telephone number is (571) 272-4842. The examiner can normally be reached Monday - Friday, 10 a.m. - 6 p.m. ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Dmitry Suhol can be reached at (571) 272-4430. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.G.D./Examiner, Art Unit 3715
/DMITRY SUHOL/Supervisory Patent Examiner, Art Unit 3715