Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
1. This communication is responsive to the Amendment filed 1/13/2026.
2. Claims 1-20 are pending in this application. Claims 1, 8 and 15 are independent claims. In the instant Amendment, claims 1, 7, 8, 14 and 15 were amended. This is a Non-Final action on the RCE filed 2/17/2026.
Claim Rejections - 35 USC § 103
3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
4. Claim(s) 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Jiang et al. (“Jiang”, CN 106782553) in view of Buckley et al. (“Buckley”, US 10,101,897).
Regarding claim 1, Jiang discloses a text language type switching method, comprising:
receiving a voice input of a user (see fig 1, 101; e.g., voice input);
displaying a first text segment in response to the typing input (see fig 1, 101; e.g., display converted text);
receiving a first input of the user, wherein the first input of the user selects a part of the first text segment (see fig 1, 102; e.g., receive selection instruction of character segment);
determining the selected part of the first text segment as a to-be-switched text segment of a first language type from the first text segment in response to the first input (see fig 1, 103; e.g., “receiving text selection command, determining the text selection command specified by the character information as the target words, can be from the voice data extracting the character segment corresponding to voice data segment, also can be other ways. For example, the user determines the character segment, it can through gesture operation converting the character segment to the gesture operation corresponding to the target word of the language type. As shown in FIG. 3, the left half in the figure "wiki" is a user-determined character segment, the user can indicate using English "wiki" corresponding voice data fragment to perform speech recognition using finger-painting two circles above the touch screen. The right half shown, the "wiki" corresponding voice data identifying the fragment using English and converts it into ‘wedgit’”);
receiving a second input of the user (see paragraphs [0072]-[0080]; e.g., receiving a language type instruction);
switching the to-be-switched text segment to a target text segment of a second language type in response to the second input (see figs 3 and 4A; e.g., “the user determines the character segment, it can [be] through gesture operation converting the character segment to the gesture operation corresponding to the target word of the language type”); and
displaying the target text segment of the second language type as a displayed text segment, in response to the second input (see figs 4A and 4B; e.g., display character segment based on language type instruction generated from selecting language type option).
Jiang does not expressly disclose receiving a typing input; and switching a cursor to the end of a displayed text segment in response to the second input.
However, Buckley discloses receiving a typing input (see col. 3, lines 29-41; e.g., “the display 106 may display text 114. The user may have typed the text 114, in this example, “A portion of this text will be highlighted,” into the keyboard 110, prompting the first computing system 100 to display the text 114 on the display 106.”); and
switching a cursor to the end of a displayed text segment in response to the second input (see col. 5, line 65 to col. 6, line 9; e.g., “initiating the paste operation (second input) in response to recognizing a gesture or button push (control indication)…. In response to the copied text 172 (text segment) appearing at the cursor location 164, the cursor location 164 may move to the end of the copied text 172 (text segment).”). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to include Buckley’s teachings in Jiang’s user interface as a well-known user-input alternative, yielding a predictable result.
Regarding claim 2, Jiang discloses wherein: the typing input comprises a first typing sub-input, the displaying a first text segment in response to the typing input comprises: determining an original text segment according to the first typing sub-input, and determining the to-be-switched text segment of the first language type corresponding to the original text segment, and the switching the to-be-switched text segment to a target text segment of a second language type comprises: determining, according to the original text segment corresponding to the to-be-switched text segment, a target text segment of the second language type corresponding to the original text segment (see claim 1 above).
Regarding claim 3, Jiang discloses wherein when the first language type is English, the to-be-switched text segment is the original text segment; and when the second language type is English, the target text segment is the original text segment (inherent feature).
Regarding claim 4, Jiang discloses wherein when the original text segment corresponds to a text segment of at least one second language type, the determining, according to the original text segment corresponding to the to-be-switched text segment, a target text segment of the second language type corresponding to the original text segment comprises: displaying the text segment of at least one second language type that matches the original text segment corresponding to the to-be-switched text segment; receiving a third input of the user; and determining the target text segment of the second language type from the text segment of at least one second language type in response to the third input (see claim 1 above).
Regarding claim 5, Jiang discloses wherein the first text segment comprises at least two sub-text segments that are sequentially input by the user, and the determining the selected part of the first text segment as a to-be-switched text segment of a first language type from the first text segment in response to the first input comprises: determining, according to a target type of the first input, a target sequence number corresponding to the target type, and determining a sub-text segment corresponding to the target sequence number that is in the at least two sub-text segments as the to-be-switched text segment (inherent feature).
Regarding claim 6, Jiang discloses further comprising determining the sub-text segment, comprising:
when an input event of acknowledgment text is monitored, determining, as the sub-text segment, text between latest input text and text that is input for the first time after the input event is previously monitored (inherent feature).
Regarding claim 7, Jiang discloses displaying a control indication that indicates the second language type (see figs 3 and 4A-4B; e.g., voice input button allows user to select language); wherein the second input of the user that is received comprises at least one of: the user selecting the control indication that indicates the second language type, or the user selecting a shift key (see figs 3 and 4A-4B; e.g., voice input button allows user to select language).
Claims 8-14 are similar in scope to claims 1-7, respectively, and are therefore rejected under similar rationale.
Claims 15-20 are similar in scope to claims 1-6, respectively, and are therefore rejected under similar rationale.
Response to Arguments
7. Applicant’s arguments with respect to the claim(s) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
8. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Sumner (US 2002/0095291).
9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to RASHAWN N TILLERY whose telephone number is (571)272-6480. The examiner can normally be reached M-F 9:00a - 5:30p.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William L Bashore can be reached on (571) 272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RASHAWN N TILLERY/Primary Examiner, Art Unit 2174