DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114 was filed in this application after appeal to the Patent Trial and Appeal Board, but prior to a decision on the appeal. Since this application is eligible for continued examination under 37 CFR 1.114 and the fee set forth in 37 CFR 1.17(e) has been timely paid, the appeal has been withdrawn pursuant to 37 CFR 1.114 and prosecution in this application has been reopened pursuant to 37 CFR 1.114. Applicant’s submission filed on 11-05-2025 has been entered.
Response to Arguments
Applicant's arguments filed 11-05-2025 have been fully considered but they are not persuasive.
As to the arguments directed to Ekron, PCT/JP2021/028304 has a priority date of 07-30-2021. If applicant wants to rely on the 03-08-2021 date of JP2021-036577, a certified translation of every foreign benefit application or Patent Cooperation Treaty (PCT) application not filed in English is required. See 35 U.S.C. 119(b)(3) and 372(b)(3) and 37 CFR 1.55(g)(3)(i) and 41.154(b). If no certified translation is in the official record for the application, the examiner must require the applicant to file a certified translation. The applicant should provide the required translation if applicant wants the application to be accorded the benefit of the non-English language application. Any showing of priority that relies on a non-English language application is prima facie insufficient if no certified translation of the application is on file. See 37 CFR 41.154(b) and 41.202(e). The examiner was unable to find a certified translation of the original application. Also, a new ground of rejection has been added.
The rest of applicant's arguments are directed against the references individually. One cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 30 and 37 are rejected under 35 U.S.C. 103 as being unpatentable over Katayama 20100156787 in view of Osterhout 20190278093 and further in view of Fan 20220391161 and further in view of Ekron 20220366131.
As to claim 30, Katayama discloses a terminal device [1] comprising:
a wearable object [2] to be mounted on a head of a user (see fig. 1);
a display apparatus [2b] provided in the wearable object to display a first screen [6] (see fig. 3, par. 0035);
a mobile terminal [3, 150] that is separate from the wearable object (see fig. 1, par. 0036); and
an image capturing apparatus [4] provided in the wearable object to capture an image of a finger or a specific input instructing tool when the user performs an operation on a second screen [8, 9 virtual keyboard screens] with the finger or the input instructing tool, and to output image data of the captured image to the mobile terminal wirelessly or wired (see fig. 3, par. 0037),
wherein the mobile terminal is connected to the display apparatus wirelessly or wired (see fig. 1), and has a function of controlling the display apparatus to display on the display apparatus as the first screen a screen displayed on the display unit of the mobile terminal, a function of displaying a setting screen on the display unit of the mobile terminal (see par. 0035, 0037, 0058), and a function of controlling, for the setting made by the user through the setting screen displayed on the display apparatus, the display apparatus to display on the display apparatus the first screen according to the setting made by the user through the setting screen (see figs. 3 and 4, par. 0041-0044), and
the mobile terminal includes: a storage unit configured to store various types of data including data on the first screen (see fig. 5);
an operation determination unit [208] configured to, when the image capturing apparatus captures the image of the finger or the input instructing tool with which the user performs an operation on the second screen, determine what content of the operation is performed with the finger or the input instructing tool among the various types of operations, based on the image data of the captured image (see fig. 7, par. 0078);
a position data generation unit [207] configured to, when the image capturing apparatus captures the image of the finger or the input instructing tool with which the user performs an operation on the second screen, generate position data of the finger or the input instructing tool within an image capture range that is a range in which the image capturing apparatus is able to capture an image, based on the image data of the captured image (see fig. 7, par. 0078);
a reference data generation unit configured to, when the user performs an operation at one or more predetermined positions on the second screen with the finger or the input instructing tool, generate data on the second screen for identifying a position and a size of the second screen and store the generated data as reference data in the storage unit, by using the position data of the finger or the input instructing tool generated by the position data generation unit based on the image data for which the operation determination unit determines that the operation performed at each of the predetermined positions is a predetermined operation (see par. 0109, 0124); and
an input control unit [214] configured to, when the user performs an operation on the second screen with the finger or the input instructing tool, recognize a content of an input instruction corresponding to the operation performed with the finger or the input instructing tool, by identifying a range of the second screen within the image capture range and retrieving a position where the operation is performed within the identified range of the second screen with the finger or the input instructing tool, based on data on the content of the operation performed with the finger or the input instructing tool, obtained as determined by the operation determination unit, the position data of the finger or the input instructing tool generated by the position data generation unit, the reference data on the second screen stored in the storage unit, and the data on the first screen corresponding to the second screen stored in the storage unit; and control a screen to be displayed on the display unit and the first screen to be displayed on the display apparatus, according to the recognized content of the input instruction (see par. 0078, 0094-0096); the first screen [6] is a screen that is projected or displayed on the display apparatus (see par. 0035, 0037), and the second screen is a virtual screen that appears to be floating in midair to the user wearing the wearable object when the display apparatus projects or displays the first screen (see fig. 3-4; par. 0041). Katayama fails to disclose the mobile terminal equipped with a display unit. In an analogous art, Osterhout discloses a mobile terminal [104, 108] that is separate from the wearable object [102] and is equipped with a display unit [14] (see par. 0099, 0104). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present invention to combine the teachings in order to facilitate user input as suggested by Osterhout.
The previous references fail to disclose the setting screen having a plurality of buttons that control display settings. In another analogous art, Fan discloses a wearable object (see par. 0060); the setting screen having a plurality of buttons that control display settings (see fig. 5(b), 6(a); brightness, screen mirroring), wherein activating one of the buttons [502] causes a part of the screen displayed on the display unit of the mobile terminal to be displayed as the first screen (see par. 0119), and the mobile terminal has a function of controlling, for the setting made by the user through the setting screen displayed on the display unit of the mobile terminal (see fig. 6(b); par. 0021-0122). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present invention to combine the teachings for the simple purpose of facilitating activation of the screen mirroring function.
Katayama discloses wherein the setting screen related to screen display of the display apparatus is configured to select the setting of using as the first screen a part of a screen displayed on the display unit, and also a setting of using as the first screen a screen in which the screen displayed on the display unit is different and a setting of using as the second screen a screen in which a character and/or a chart are enlarged in the screen displayed on the display unit (see par. 0046, 0071-0074, 0084, 0116, 0123-0124). Based on the new interpretation, Katayama fails to disclose enlarging on the first screen; Katayama only enlarges the virtual keyboard [second] screen. In an analogous art, Ekron discloses enlarging characters (see par. 0428, 0450). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present invention to combine the teachings in order to facilitate viewing of the screen by people with different types of vision, as disclosed by the reference.
As to claim 37, Katayama discloses the terminal device according to claim 30, wherein the setting screen related to screen display of the display apparatus is configured to select the setting of using as the first screen a part of a screen displayed on the display unit, and also a setting of using as the first screen a screen in which the screen displayed on the display unit is different and a setting of using as the second screen a screen in which a character and/or a chart are enlarged in the screen displayed on the display unit (see par. 0046, 0071-0074, 0084, 0116, 0123-0124). Based on the new interpretation, Katayama fails to disclose enlarging on the first screen; Katayama only enlarges the virtual keyboard [second] screen. In an analogous art, Ekron discloses enlarging characters (see par. 0428, 0450). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present invention to combine the teachings in order to facilitate viewing of the screen by people with different types of vision, as disclosed by the reference.
Claim(s) 30 and 37 are rejected under 35 U.S.C. 103 as being unpatentable over Katayama 20100156787 in view of Osterhout 20190278093 and further in view of Fan 20220391161 and further in view of Ikeda 20160025983.
As to claim 30, Katayama discloses a terminal device [1] comprising:
a wearable object [2] to be mounted on a head of a user (see fig. 1);
a display apparatus [2b] provided in the wearable object to display a first screen [6] (see fig. 3, par. 0035);
a mobile terminal [3, 150] that is separate from the wearable object (see fig. 1, par. 0036); and
an image capturing apparatus [4] provided in the wearable object to capture an image of a finger or a specific input instructing tool when the user performs an operation on a second screen [8, 9 virtual keyboard screens] with the finger or the input instructing tool, and to output image data of the captured image to the mobile terminal wirelessly or wired (see fig. 3, par. 0037),
wherein the mobile terminal is connected to the display apparatus wirelessly or wired (see fig. 1), and has a function of controlling the display apparatus to display on the display apparatus as the first screen a screen displayed on the display unit of the mobile terminal, a function of displaying a setting screen on the display unit of the mobile terminal (see par. 0035, 0037, 0058), and a function of controlling, for the setting made by the user through the setting screen displayed on the display apparatus, the display apparatus to display on the display apparatus the first screen according to the setting made by the user through the setting screen (see figs. 3 and 4, par. 0041-0044), and
the mobile terminal includes: a storage unit configured to store various types of data including data on the first screen (see fig. 5);
an operation determination unit [208] configured to, when the image capturing apparatus captures the image of the finger or the input instructing tool with which the user performs an operation on the second screen, determine what content of the operation is performed with the finger or the input instructing tool among the various types of operations, based on the image data of the captured image (see fig. 7, par. 0078);
a position data generation unit [207] configured to, when the image capturing apparatus captures the image of the finger or the input instructing tool with which the user performs an operation on the second screen, generate position data of the finger or the input instructing tool within an image capture range that is a range in which the image capturing apparatus is able to capture an image, based on the image data of the captured image (see fig. 7, par. 0078);
a reference data generation unit configured to, when the user performs an operation at one or more predetermined positions on the second screen with the finger or the input instructing tool, generate data on the second screen for identifying a position and a size of the second screen and store the generated data as reference data in the storage unit, by using the position data of the finger or the input instructing tool generated by the position data generation unit based on the image data for which the operation determination unit determines that the operation performed at each of the predetermined positions is a predetermined operation (see par. 0109, 0124); and
an input control unit [214] configured to, when the user performs an operation on the second screen with the finger or the input instructing tool, recognize a content of an input instruction corresponding to the operation performed with the finger or the input instructing tool, by identifying a range of the second screen within the image capture range and retrieving a position where the operation is performed within the identified range of the second screen with the finger or the input instructing tool, based on data on the content of the operation performed with the finger or the input instructing tool, obtained as determined by the operation determination unit, the position data of the finger or the input instructing tool generated by the position data generation unit, the reference data on the second screen stored in the storage unit, and the data on the first screen corresponding to the second screen stored in the storage unit; and control a screen to be displayed on the display unit and the first screen to be displayed on the display apparatus, according to the recognized content of the input instruction (see par. 0078, 0094-0096); the first screen [6] is a screen that is projected or displayed on the display apparatus (see par. 0035, 0037), and the second screen is a virtual screen that appears to be floating in midair to the user wearing the wearable object when the display apparatus projects or displays the first screen (see fig. 3-4; par. 0041). Katayama fails to disclose the mobile terminal equipped with a display unit. In an analogous art, Osterhout discloses a mobile terminal [104, 108] that is separate from the wearable object [102] and is equipped with a display unit [14] (see par. 0099, 0104). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present invention to combine the teachings in order to facilitate user input as suggested by Osterhout.
The previous references fail to disclose the setting screen having a plurality of buttons that control display settings. In another analogous art, Fan discloses a wearable object (see par. 0060); the setting screen having a plurality of buttons that control display settings (see fig. 5(b), 6(a); brightness, screen mirroring), wherein activating one of the buttons [502] causes a part of the screen displayed on the display unit of the mobile terminal to be displayed as the first screen (see par. 0119), and the mobile terminal has a function of controlling, for the setting made by the user through the setting screen displayed on the display unit of the mobile terminal (see fig. 6(b); par. 0021-0122). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present invention to combine the teachings for the simple purpose of facilitating activation of the screen mirroring function.
Katayama discloses wherein the setting screen related to screen display of the display apparatus is configured to select the setting of using as the first screen a part of a screen displayed on the display unit, and also a setting of using as the first screen a screen in which the screen displayed on the display unit is different and a setting of using as the second screen a screen in which a character and/or a chart are enlarged in the screen displayed on the display unit (see par. 0046, 0071-0074, 0084, 0116, 0123-0124). Based on the new interpretation, Katayama fails to disclose enlarging on the first screen; Katayama only enlarges the virtual keyboard [second] screen. In an analogous art, Ikeda discloses enlarging characters (see par. 0083, 0085, 0089, 0182). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present invention to combine the teachings in order to facilitate viewing of the screen by people with different types of vision, as disclosed by the reference.
As to claim 37, Katayama discloses the terminal device according to claim 30, wherein the setting screen related to screen display of the display apparatus is configured to select the setting of using as the first screen a part of a screen displayed on the display unit, and also a setting of using as the first screen a screen in which the screen displayed on the display unit is different and a setting of using as the second screen a screen in which a character and/or a chart are enlarged in the screen displayed on the display unit (see par. 0046, 0071-0074, 0084, 0116, 0123-0124). Based on the new interpretation, Katayama fails to disclose enlarging on the first screen; Katayama only enlarges the virtual keyboard [second] screen. In an analogous art, Ikeda discloses enlarging characters (see par. 0083, 0085, 0089, 0182). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present invention to combine the teachings in order to facilitate viewing of the screen by people with different types of vision, as disclosed by the reference.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARCOS L TORRES whose telephone number is (571)272-7926. The examiner can normally be reached 10:00 AM - 6:00 PM M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alison Slater can be reached on (571)270-0375. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
MARCOS L. TORRES
Primary Examiner
Art Unit 2647
/MARCOS L TORRES/Primary Examiner, Art Unit 2647