DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1 – 9 and 11 – 16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
When considering subject matter eligibility under 35 USC 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter.
Specifically, claims 1 – 9 and 11 – 16 are directed to a method/apparatus. They thereby fall within at least one of the four statutory categories of invention.
If the claim does fall within one of the statutory categories, it must then be determined whether the claim is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea).
Claims 1 – 9 and 11 – 16 recite steps of observation, evaluation, and judgment that can be practically performed by a human, either mentally or with the use of pen and paper.
The limitation of “estimating an input format of the input field based on the information; generating a speech recognition dictionary based on the estimated input format; executing speech recognition by using the speech recognition dictionary with respect to speech of a user, and inputting, in the input field, text of an obtained speech recognition result in accordance with the identifier” in claims 1 – 9, 11 – 16, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, but for the recitation of generic computer components. That is, other than reciting “computer readable storage medium including computer executable instructions, wherein the instructions, when executed by a processor”, nothing in the claim element precludes the steps from practically being performed in a human mind.
The mere nominal recitation of a generic processing circuit does not take the claim limitations out of the mental processes grouping.
If a claim limitation, under its broadest reasonable interpretation, covers mental processes but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas (concepts performed in the human mind including an observation, evaluation, judgment, and opinion). Accordingly, the claims recite an abstract idea.
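Purely as an illustration of the recited sequence (and not as part of the formal record), the estimate-format, build-dictionary, recognize-and-fill steps quoted above could be sketched as follows. All function names are hypothetical, the regex-based format inference is an assumption standing in for whatever estimation the claims actually cover, and the speech recognizer itself is not modeled:

```python
import re

# Hypothetical sketch: infer an input format from example field values,
# build a small "speech recognition dictionary" (here, a regex pattern),
# and validate a recognized utterance against it before filling the field.

def estimate_input_format(examples):
    """Guess a coarse input format from example values (assumption: three
    formats suffice for this sketch: date, integer, free text)."""
    if all(re.fullmatch(r"\d{4}-\d{2}-\d{2}", e) for e in examples):
        return "date"
    if all(re.fullmatch(r"\d+", e) for e in examples):
        return "integer"
    return "text"

def generate_dictionary(input_format):
    """Map the estimated format to a validation pattern standing in for a
    speech recognition dictionary."""
    patterns = {
        "date": r"\d{4}-\d{2}-\d{2}",
        "integer": r"\d+",
        "text": r".+",
    }
    return patterns[input_format]

def fill_field(field, recognized_text, dictionary_pattern):
    """Input the recognized text into the field only if it matches the
    dictionary pattern; otherwise leave the field unchanged."""
    if re.fullmatch(dictionary_pattern, recognized_text):
        field["value"] = recognized_text
    return field

# Walk the pipeline for a date-valued field identified by "D2".
fmt = estimate_input_format(["2023-01-15", "2023-02-01"])
pattern = generate_dictionary(fmt)
field = fill_field({"id": "D2", "value": None}, "2023-03-20", pattern)
```

Each step here is a simple deterministic rule, which is consistent with the observation that, absent the generic computer components, the evaluation could be carried out mentally or with pen and paper.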
This judicial exception is not integrated into a practical application. In particular, the claims recite the additional elements “acquiring one or more items and information about a value of an input field for the one or more items from a recording data sheet including the input field of speech input respectively for the items; displaying the recording data sheet by a display and displaying the estimated input format in the input field included in the recording data sheet”.
The limitation “acquiring one or more items and information about a value of an input field for the one or more items from a recording data sheet including the input field of speech input respectively for the items” amounts to a data-gathering step, which is considered to be insignificant extra-solution activity (see MPEP 2106.05(g)).
The limitation “displaying the recording data sheet by a display and displaying the estimated input format in the input field included in the recording data sheet” represents extra-solution activity because it is a mere nominal or tangential addition to the claim, i.e., mere generic transmission and presentation of collected and analyzed data (see MPEP 2106.05(g)).
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. See MPEP 2106.05(g).
The insignificant extra-solution activities identified above, which include the data-gathering (acquiring, inputting) and displaying steps, are recognized by the courts as well-understood, routine, and conventional activities when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. See MPEP 2106.05(d)(II): (i) receiving or transmitting data over a network, e.g., using the Internet to gather data, buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); (v) presenting offers and gathering statistics, OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93. The claims are not patent eligible.
Claims 1 – 9, 11 – 16 do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processing circuit to perform the acquiring, inputting, generating, and displaying steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept.
Even when considered in combination, these additional elements (a computer readable storage medium including computer executable instructions executed by a processing circuit) represent mere instructions to apply an exception and insignificant extra-solution activity, which do not provide an inventive concept.
Claims 1 – 9 and 11 – 16, as a whole, do not amount to significantly more than the abstract idea itself. This is because the claims do not effect an improvement to the functioning of a computer itself, and the claims do not move beyond generally linking the use of the abstract idea to a particular technological environment.
Claim 10 is considered to be statutory because it converts text to speech and speech to text using speech recognition and speech synthesis, which constitutes a transformation from one state or thing to another.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1 – 16 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 - 15 of U.S. Patent No. 12,159,629. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1 – 16 of the instant application are similar in scope and content to the claims of the cited U.S. patent.
It would have been obvious to an artisan at the time the invention was made to use the teaching of claims 1 - 15 of the '629 patent as a general teaching for generating a template to perform the method as claimed in the present invention. The instant claims obviously encompass the claimed invention of the '629 patent and differ only in the method steps. To the extent that the instant claims are broader than, and therefore generic to, the claimed invention of the '629 patent [the species], In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993), holds that a generic claim cannot issue without a terminal disclaimer if a species claim has previously been claimed in a patent application. And since the structure is as recited, the method step is obtained and is therefore obvious.
Here is a comparison between claim 5 of the instant application and claim 15 of the cited patent.
Instant Application 18/589,692:
"5. A non-transitory computer readable storage medium including computer executable instructions, wherein the instructions, when executed by a processor, cause the processor to perform a method comprising:"
Patent 12,159,629:
"1. A non-transitory computer readable medium including computer executable instructions, wherein the instructions, when executed by a processor, cause the processor to perform a method comprising:"
Comparison: Same

Instant Application:
"acquiring one or more items and information about a value of an input field for the one or more items from a recording data sheet including the input field of speech input respectively for the items; estimating an input format of the input field based on the information;"
Patent:
"generating a template, regarding a recording data sheet including a plurality of items, for one or more of the items that can be specified, with reference to an input order of input target items selected from the items;"
Comparison: Similar

Instant Application:
"generating a speech recognition dictionary based on the estimated input format, and executing speech recognition by using the speech recognition dictionary with respect to speech of a user, and inputting, in the input field, text of an obtained speech recognition result in accordance with the identifier."
Patent:
"performing a speech recognition on an utterance of a user and generate a speech recognition result including a result of classifying, based on the template, the utterance into a range specifying utterance and a value utterance indicative of a value to be input; and determining an input target range relating to a plurality of items specified by the utterance of the user among the items based on the range specifying utterance."
Comparison: Similar
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1 – 16 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Watanabe et al. (US PAP 2023/0014452).
As per claims 1, 15, and 16, Watanabe et al. teach a non-transitory computer readable storage medium including computer executable instructions, wherein the instructions, when executed by a processor, cause the processor to perform a method comprising:
acquiring one or more items and information about a value of an input field for the one or more items from a recording data sheet including the input field of speech input respectively for the items (“generates a template relating to one or more items that can be specified, with reference to the input sequence list and based on the input order relating to the input target items… The input unit 109 performs various data inputs, and also performs a value input relating to a value utterance in the input target range.”; paragraphs 51 – 55); and
estimating an input format of the input field based on the information (“the decision unit 107 decides an input target range, if the recognition result contains information about which of the character strings of the input formats of the value dictionary”; paragraphs 127 – 131, 147- 149).
As per claim 2, Watanabe et al. further disclose an input sequence configured to input the values in the plurality of input fields included in the recording data sheet is retained in a memory of the computer, and the input sequence includes an identifier configured to identify the input field of an input target item included in the recording data sheet and includes the estimated input format (“The sequence number indicates the order of inputting the data items 21 into the form data 20. The input position is an identifier that uniquely specifies the input position 22 shown in FIG. 2. If the input position 22 is specified by a column index and a row number, for example, the identifier “D2” is used…the determination unit 106, and the decision unit 107 performs an input target range decision process based on the input target item and the speech recognition result. As a result of the decision process, an input target range, and a value utterance, which is an utterance relating to a value to be input to the input position, are generated.”; paragraphs 62 – 68, 72 - 86).
As per claim 3, Watanabe et al. further disclose the method further comprises selecting a range of the plurality of input fields from the recording data sheet, the acquiring includes acquiring the information correspondingly to the input field of the selected range and the estimating includes estimating the input format common to the input field of the range based on the correspondingly acquired information (“In step S505, the generation unit 105, the determination unit 106, and the decision unit 107 performs an input target range decision process based on the input target item and the speech recognition result. As a result of the decision process, an input target range, and a value utterance, which is an utterance relating to a value to be input to the input position, are generated”; paragraphs 61 – 68, 72 - 86).
As per claim 4, Watanabe et al. further disclose the input sequence further includes guidance including an item name of the input target item, and the selecting includes selecting the range of the input field based on the guidance or the identifier (“it suffices that the template includes a sequence number, an identifier of the input position, a content of the guidance, etc. in the input sequence list 30 shown in FIG. 3”; paragraphs 61 – 68).
As per claim 5, Watanabe et al. further disclose generating a speech recognition dictionary based on the estimated input format, and executing speech recognition by using the speech recognition dictionary with respect to speech of a user, and inputting, in the input field, text of an obtained speech recognition result in accordance with the identifier (“the decision unit 107 collates a speech recognition result with the decision dictionary, and decides whether the speech recognition result includes a range specifying utterance which is an utterance that includes an intention to specify a range. Specifically, if the speech recognition result includes a portion matching with a range specifying template, the matching portion of the speech recognition result is decided as a range specifying utterance.”; paragraphs 71 – 87).
As per claim 6, Watanabe et al. further disclose the information is an input value example which is an example of the value of the input field, and the acquiring includes acquiring the value as the input value example from at least one of the recording data sheet, which includes the value of the input field input in advance, and a user interface which receives input of the value in the input field (“If “skip” is acquired as a value utterance, an input unit 109 inputs nothing into the input target range, or may input a symbol meaning no data, such as “N/A”.”; paragraph 158).
As per claim 7, Watanabe et al. further disclose the information is a format configuration which is used to display the value of the input field, and the acquiring includes acquiring, as the information, the format configuration included in the recording data sheet (“input format in the input sequence list corresponding to the specified input target range may also be displayed as a content of a value that can be currently recognized…a content based on an available input format can be displayed at a timing when the input target range is updated.”; paragraphs 167, 184).
As per claim 8, Watanabe et al. further disclose the information includes an input value example which is an example of the value of the input field and includes a format configuration which is used to display the value of the input field, and the acquiring includes acquiring the information including the input value example and the format configuration from the recording data sheet (“If “skip” is acquired as a value utterance, an input unit 109 inputs nothing into the input target range, or may input a symbol meaning no data, such as “N/A”… input format in the input sequence list corresponding to the specified input target range may also be displayed as a content of a value that can be currently recognized”; paragraphs 158, 167, 184).
As per claim 9, Watanabe et al. further disclose the method further comprises checking, the input sequence further includes guidance about the speech input in the input field, and the checking includes checking the estimated input format based on the guidance (“it is assumed that available items as guidances are acquired from form data and an input sequence list is generated before operation… The input format is a format to accept an utterance having a content constituted with a specific grammar in the speech recognition process, and used to generate a speech recognition dictionary. For example, the input format specifies words of “date”, “terms (‘no anomaly’|‘exchange required’)”, “terms (‘operation normal’|‘operation abnormal’)”, etc. The input format also specifies a pattern recognized by the speech recognition unit 103, such as “a numerical value (three-digit integer)”, “a numerical value (two-digit integer and single-digit decimal fraction)”, “five alphanumeric characters”, etc.”; paragraphs 59 -66, 114, 128 – 139).
As per claim 10, Watanabe et al. further disclose generating an input value example from the estimated input format, subjecting the generated input value example to speech synthesis, executing speech recognition by using the speech recognition dictionary with respect to speech data obtained by the speech synthesis, and checking the estimated input format by judging whether the text of the obtained speech recognition result matches the generated input value example or not (“the speech synthesis unit 104 reproduces a confirmation message which prompts the user to confirm the input target range. The confirmation message may be a simple fixed phrase, such as “Is this OK?” Alternatively, the speech synthesis unit 104 may synthesize a speech for a message including the input target range so as to vocally repeat the input target range specified by the user, and may reproduce the synthetic speech.”; paragraphs 70 – 75).
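As an illustrative aside only (not part of the claim mapping), the round-trip check recited in claim 10 can be sketched as follows. The synthesis and recognition steps are stand-in stubs, and all names are hypothetical; a real implementation would invoke actual text-to-speech and speech recognition engines:

```python
# Hypothetical sketch of the claim 10 round-trip check: generate an example
# value, "synthesize" it to speech, "recognize" the speech back to text,
# and confirm the recognized text matches the generated example.

def synthesize(text):
    # Stand-in for a text-to-speech engine: returns fake audio tagged
    # with the source text instead of a real waveform.
    return {"audio_for": text}

def recognize(audio):
    # Stand-in for a speech recognizer: recovers the text carried by
    # the fake audio object.
    return audio["audio_for"]

def check_format(example_value):
    """Round-trip the example value through synthesis and recognition and
    report whether the result still matches the generated example."""
    audio = synthesize(example_value)
    result = recognize(audio)
    return result == example_value

# With lossless stubs, the round trip always succeeds; a real engine
# could fail here, flagging an unsuitable estimated input format.
ok = check_format("123")
```

The point of the check is that a mismatch after the round trip indicates the estimated input format does not survive the speech channel, i.e., the format estimate should be revisited.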
As per claim 11, Watanabe et al. further disclose displaying the recording data sheet by a display and displaying the estimated input format in the input field included in the recording data sheet (“displays to the user a content that can be recognized by the speech recognition based on the input format of the current input target item.”; paragraphs 159 – 162).
As per claim 12, Watanabe et al. further disclose generating one or more input value examples from the estimated input format and displaying the generated input value examples in the input field included in the recording data sheet by a display (“input format in the input sequence list corresponding to the specified input target range may also be displayed as a content of a value that can be currently recognized.”; paragraphs 159 – 162, 167).
As per claim 13, Watanabe et al. further disclose generating a range of the value which can be input in the input field from the estimated input format and displaying the generated range in the input field included in the recording data sheet by a display (“displays to the user a content that can be recognized by the speech recognition based on the input format of the current input target item…input format in the input sequence list corresponding to the specified input target range may also be displayed as a content of a value that can be currently recognized.”; paragraphs 159 – 162, 167).
As per claim 14, Watanabe et al. further disclose displaying by a display, an input sequence including an identifier which identifies the input field of an input target item included in the recording data sheet, the estimated input format, and an order of inputting the values in the plurality of input fields included in the recording data sheet is retained in a memory of the computer, and the displaying by the display includes displaying the recording data sheet and overlaying, by the display, the order of inputting the values on the plurality of input fields included in the recording data sheet based on the input sequence (“The sequence number indicates the order of inputting the data items 21 into the form data 20. The input position is an identifier that uniquely specifies the input position 22 shown in FIG. 2. If the input position 22 is specified by a column index and a row number, for example, the identifier “D2” is used... displays to the user a content that can be recognized by the speech recognition based on the input format of the current input target item…input format in the input sequence list corresponding to the specified input target range may also be displayed as a content of a value that can be currently recognized.”; paragraphs 61 – 68, 159 – 162, 167).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Beaumaraiage et al. teach a method for audible prescription label information using RFID prescription packaging. Agrawal et al. teach dynamic inference of a voice command for software operation from user manipulation of an electronic device. Walker et al. teach a system for vending physical and information items. Kelley teaches an interactive knowledge base system. Ehsani et al. teach phrase-based dialogue modeling with particular application to creating a recognition grammar for a voice-controlled user interface.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LEONARD SAINT-CYR, whose telephone number is (571) 272-4247. The examiner can normally be reached Monday through Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Richemond Dorvil can be reached at (571)272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LEONARD SAINT-CYR/Primary Examiner, Art Unit 2658