DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/31/2025 has been entered.
Claim Status
Applicant’s response and amendments were received on 12/31/2025 and entered. Claims 2-8, 10-14, and 18-20 have been amended. Claim 21 has been added. Claims 1, 9, and 15-17 have been cancelled. Claims 2-8, 10-14, and 18-21 are pending. All pending claims are rejected.
Response to Arguments
Applicant’s arguments regarding the 103 rejections are moot because the Examiner has introduced new grounds of rejection based on an additional citation and new obviousness analyses.
Compact Prosecution
With respect to claim interpretation, the Examiner has provided notes marked “[BRI on the record]” throughout the Office Action so that the record is clear about the scope of the claimed invention and about the basis for the Examiner’s analyses. A clear record of the claim interpretation can expedite examination by allowing it to focus on Applicant’s inventive concept and its comparison with the related prior art.
If there are disagreements, Applicant may present an alternative interpretation based on MPEP 2111. The Examiner will adopt Applicant’s interpretation on the record if Applicant’s interpretation is reasonable and/or the arguments are persuasive.
Applicant may amend claims relying on the Examiner’s claim interpretation provided on the record.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2-8, 12-14, and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Kasahara et al. (WO 2014162825 A1) in view of Tucker (US 20220076455 A1) and Jiang et al. (US 20160269631 A1).
Regarding Claim 20, Kasahara teaches A wearable terminal device configured to be used by being worn by a user (
[Image: media_image1.png, 616 x 954, grayscale]
“The client 200 is a wearable terminal (hereinafter also simply referred to as a wearable terminal 200). The wearable terminal 200 includes, for example, either or both of an imaging unit and a display unit, and functions as either or both of the above (1) and (3). In the illustrated example, the wearable terminal 200 is a glasses type, . . ..” Kasahara p. 3.), the wearable terminal device comprising:
a transparent display through which the user views a real space (
“Alternatively, when the display is a transmissive type, the wearable terminal 200 may transparently superimpose and display the annotation on the real-world image that the user is directly viewing.” Kasahara p. 3.);
a camera configured to capture a real-time moving image of the real space viewed by the user (“When functioning as the device of (1) above, the wearable terminal 200 includes, for example, a camera installed in a frame portion of glasses as an imaging unit. With this camera, wearable terminal 200 can acquire an image in real space from a position close to the user's viewpoint.” Kasahara p. 3. “The imaging unit 960 is a camera module that captures an image. The imaging unit 960 images a real space using an imaging element such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor), and generates a captured image. A series of captured images generated by the imaging unit 960 constitutes a video.” Kasahara p. 5.);
at least one processor (“Further, the processor 910 executes display control for realizing display of an AR image as described later in the server 100, the wearable terminal 200, the tablet terminal 300, the mobile phone 400, the laptop PC 500, or the projector 700, for example.” Kasahara p. 4-5.), wherein the at least one processor is configured to:
detect a visible area of the real space viewed by the user through the transparent display (
[Image: media_image2.png, 570 x 796, grayscale]
“FIG. 31 and FIG. 32 are diagrams illustrating an application example of displaying an annotation outside the visible range according to an embodiment of the present disclosure. In the illustrated example, the display of the annotation changes while the image 1200 viewed by the user of the wearable terminal 200 changes from the image 1200a to the image 1200b and further to the image 1200c. In the image 1200, a pointer 1210, a direction display 1230, and a comment 1220 are displayed as annotations.” Kasahara p. 22.
The claimed “visible area” corresponds to fig. 31, image 1200c, “the image 1200 viewed by the user of the wearable terminal 200.”),
determine that the position of the instructional image in the real space is outside the visible area (
Here, 1220 is initially outside of the visible area, as in 1200a and 1200b. The “notification image,” mapped to the arrow 1230a and/or 1230b, makes “the user aware of existence” of the instructional image (1220 + annotation target).
“Note that the pointer 1210 is continuously displayed near the center of the image 1200, for example, as an icon indicating a user's gaze area, unlike some examples described above. The user of the wearable terminal 200 is guided by the direction display 1230 so that the annotation target (pan (PAN) in the illustrated example) input by the user of the tablet terminal 300 enters the pointer 1210, for example.” Kasahara p. 22.
Kasahara discloses that an annotation like “Check” is tied to a real-space location associated with objects like the pan in fig. 31, stating “In the wearable terminal 200, annotations input on the tablet terminal 300 are displayed on the image 1200 as a pointer 1210 and a comment 1220. The position where these annotations are displayed in the image 1200 corresponds to the position of the real space in the image 1300 displayed on the tablet terminal 300.” Kasahara p. 22.), and
display a notification image on a periphery of the transparent display (1230a or 1230b) indicating existence of the instructional image (1220 + annotation target) outside of the visible area (1200a or 1200b) (
[Image: media_image2.png, 570 x 796, grayscale]
“FIG. 31 and FIG. 32 are diagrams illustrating an application example of displaying an annotation outside the visible range according to an embodiment of the present disclosure. In the illustrated example, the display of the annotation changes while the image 1200 viewed by the user of the wearable terminal 200 changes from the image 1200a to the image 1200b and further to the image 1200c. In the image 1200, a pointer 1210, a direction display 1230, and a comment 1220 are displayed as annotations.” Kasahara p. 22.
The claimed “instructional image” is mapped to an annotated image that includes fig. 31 1220 “CHECK,” an annotation, along with the annotation target, a pan.
The claimed “visible area” corresponds to fig. 31, image 1200c, “the image 1200 viewed by the user of the wearable terminal 200.”
The “notification image,” mapped to the arrow 1230a and/or 1230b, makes “the user aware of existence” of (1220 + annotation target). The “notification image” is placed at a periphery of the transparent display corresponding to a direction in which the instructional image is located in the real space, as shown in fig. 31.).
Kasahara’s notification image (1230a or 1230b) does not explicitly teach a position of the periphery at which the notification image is disposed indicating a direction in which the instructional image is located in the real space.
Kasahara teaches that its notification image could instead take the form shown in Kasahara Fig. 28 (1260a, 1260b, 1260c), which shows a position of the periphery at which the notification image is disposed indicating a direction in which the instructional image is located in the real space (
[Image: media_image3.png, 317 x 298, grayscale]
“FIG. 28 is a diagram illustrating a fourth example of displaying an annotation outside the viewable range according to an embodiment of the present disclosure. In the illustrated example, when an apple to be annotated (APPLE) is outside the image 1200, the end portion 1260 of the image 1200 closer to the apple shines. For example, in the image 1200a, since the apple is in the lower right direction of the screen, the lower right end portion 1260a shines. In the image 1200b, since the apple is in the upper left direction of the screen, the upper left end portion 1260b shines. In the image 1200c, since the apple is in the lower left direction of the screen, the lower left end portion 1260c shines.” Kasahara p. 21.
“In the above example, the region of the end portion 1260 can be set based on the direction in which the annotation target exists as viewed from the image 1200. Although the example in the oblique direction is shown in the figure, in another example, when the apple is in the left direction of the image 1200, the left end portion 1260 may shine. In this case, the end portion 1260 may be the entire left side of the image 1200.” Kasahara p. 21.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Kasahara’s periphery based notification image with Kasahara’s teaching based on Figs. 31-32. One of ordinary skill in the art would be motivated to reduce obstruction of a user’s view. “Thus, when notifying that an annotation exists outside the viewable range due to a change in the display of the end portion 1260, it is not necessary to display a separate direction using, for example, an arrow, which obstructs the display of the image 1200.” Kasahara p. 21.
After the combination of these embodiments, the resulting display could appear as follows:
[Image: media_image4.png, 322 x 304, grayscale]
).
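For illustration only, the following minimal sketch (Python) shows one way the combined behavior could be realized. It assumes a pinhole camera model; the function names, parameters, and edge-naming convention are assumptions for exposition and are not details disclosed by Kasahara. The sketch determines whether an annotation's real-space position falls inside the visible area and, if not, which peripheral edge segment should glow to indicate its direction, as in Fig. 28:
```python
import numpy as np

def camera_coords(point_world, rotation, translation):
    """Transform a real-space point into camera coordinates."""
    return rotation @ (np.asarray(point_world, dtype=float) - translation)

def peripheral_edge(point_world, rotation, translation,
                    width, height, fx, fy, cx, cy):
    """Return None if the point is inside the visible area; otherwise the
    peripheral edge segment to light (e.g., 'lower-right' as in Fig. 28)."""
    x, y, z = camera_coords(point_world, rotation, translation)
    if z > 0.0:
        u = fx * x / z + cx          # pinhole projection to pixels
        v = fy * y / z + cy
        if 0.0 <= u < width and 0.0 <= v < height:
            return None              # annotation is within the visible area
    # Off-screen (or behind the user): pick the edge from the sign of the
    # camera-space direction; +x is right, +y is down in this convention.
    horizontal = "right" if x >= 0.0 else "left"
    vertical = "lower" if y >= 0.0 else "upper"
    return f"{vertical}-{horizontal}"
```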
Regarding “instructional image,” the Examiner is reading the limitation to mean: an image that provides instructions or explanations. The Examiner has mapped the virtual icon and message requesting that a user “Check” the pan on the stove as part of the claimed “instructional image.” Although such a reminder/direction is one type of instruction, Applicant’s disclosure of “instructional image” appears to describe images that teach or explain something to a user. Kasahara does not explicitly and clearly disclose an “instructional image” that teaches or explains something to a user.
Further, Kasahara does not explicitly disclose
transmit the real-time moving image to an external device configured to display the real-time moving image,
receive, from the external device based on a selected position within a still image of the real-time moving image, a command to display an instructional image on the transparent display at a position in the real space corresponding to the selected position in the still image,
Tucker teaches an “instructional image” that teaches or provides an explanation about content (
[Image: media_image5.png, 510 x 786, grayscale]
“In other examples, overlaid, superimposed installation, trouble shooting, frequently asked questions (FAQ) information 1302 is overlaid, superimposed, etc. on the real world content 304 (see FIG. 13). In still further examples, a connection diagram 1402 (e.g., showing which cables are to be connected where) are overlaid, superimposed, etc. on the real world content 304 to assist, guide the user 108 in properly connecting the components C1, C2, G, etc. (see FIG. 14).” Tucker ¶ 50.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Tucker’s instructional images, which teach or explain, with Kasahara. One of ordinary skill in the art would be motivated to teach or explain to a user more effectively by using augmented or mixed reality. Further, the combination could also make remote teaching/instruction more effective.
Kasahara in view of Tucker does not explicitly disclose
transmit the real-time moving image to an external device configured to display the real-time moving image,
receive, from the external device based on a selected position within a still image of the real-time moving image, a command to display an instructional image on the transparent display at a position in the real space corresponding to the selected position in the still image.
Jiang teaches
transmit the real-time moving image to an external device configured to display the real-time moving image (
[Image: media_image6.png, 390 x 748, grayscale]
“At the work site, the camera 21c of the operator 2 captures a camera image 2c presenting an environment of a work site, and the camera image 2c is transmitted from the operator terminal 20t to the instructor terminal 10t. The camera image 2c is displayed at the instructor terminal 10t.” Jiang ¶ 48.
“The operation supporting method in the first embodiment will be briefly described. In the first embodiment, integrated posture information 2e generated based on the camera image 2c, and the camera image 2c captured by the camera 21c are distributed from the operator terminal 201 to the remote support apparatus 101 at real time. From the remote support apparatus 101, instruction information 2f, which the instructor 1 inputs by pointing on the panorama image 4, is distributed to the operator terminal 201. Also, audio information 2v between the operator 2 and the instructor 1 is also interactively distributed at real time.” Jiang ¶ 74.
“The camera image 2c is captured by the camera 21c, and a stream of the multiple camera images 2c successively captured in time sequence is distributed as a video.” Jiang ¶ 76.
“With this camera, wearable terminal 200 can acquire an image in real space from a position close to the user's viewpoint. The acquired image is transmitted to the server 100.” Kasahara p. 3.),
receive, from the external device based on a selected position within a still image of the real-time moving image, a command to display an instructional image on the transparent display at a position in the real space corresponding to the selected position in the still image (
Fig. 1:
[Image: media_image7.png, 196 x 282, grayscale]
shows the selected position that “HERE” points to within a still image (camera image 2c) for the instruction detail 1e added by the instructor. The HMD 21d is the transparent display.
Jiang teaches an instructor’s command to display the instructional image, mapped to “Instruction detail 1e,” stating “When the instructor 1 inputs an instruction detail 1e on the camera image 2c displayed at the instructor terminal 10t, instruction data 1d is sent to the operator terminal 20t. When the operator terminal 20t receives the instruction data 1d, an image generated by integrating the camera image 2c and the instruction detail 1e is displayed at the display device 21d.” Jiang ¶ 49.
Jiang further explains, “When the circumstance of the work site, that is, the environment of the working place is shared between the operator 2 and the instructor 1, the instructor 1 indicates an operation target at the work site to solve the problem with respect to the camera image 2c displayed at the instructor terminal 10t (PHASE_2). In the PHASE_2, it is preferable to accurately point out the operation target in a location relationship with the operator 2.” Jiang ¶ 53.
Jiang further explains, “FIG. 3 is a diagram for explaining an operation supporting method in a first embodiment. In a system 1001 illustrated in the first embodiment depicted in FIG. 3, a marker 7a is placed at a location to be the reference point at a working place 7. The marker 7a is used as a reference object representing the reference point, and includes information to specify a location and a posture of the operator 2 from the camera image 2c captured by the camera 21c. An AR marker or the like may be used, but is not limited to the AR marker.” Jiang ¶ 70.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Jiang’s remote-instruction teachings with Kasahara in view of Tucker. One of ordinary skill in the art would be motivated to provide remote instruction to an operator on site, which could improve work efficiency and safety.
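For illustration only, the following minimal sketch (Python) shows the transmit/receive flow as mapped above: the wearable streams pose-tagged frames of the real-time moving image to an external device and receives back a command carrying the position selected within a still frame, which is then anchored at the corresponding real-space position. All names, data structures, and the queue-based transport are assumptions for exposition, not structures disclosed by Kasahara, Tucker, or Jiang.
```python
import queue
from dataclasses import dataclass

@dataclass
class Frame:
    frame_id: int
    image: bytes
    pose: object                 # camera pose at capture time

@dataclass
class DisplayCommand:
    frame_id: int                # the still image the instructor annotated
    u: float                     # selected position within that still image
    v: float
    content: str                 # e.g., the comment "CHECK"

to_external = queue.Queue()      # wearable -> external device (video stream)
to_wearable = queue.Queue()      # external device -> wearable (commands)
pose_history = {}                # frame_id -> pose, kept on the wearable

def wearable_transmit(frame):
    """Transmit one frame of the real-time moving image."""
    pose_history[frame.frame_id] = frame.pose
    to_external.put(frame)

def wearable_receive(unproject):
    """Receive a display command and anchor it in real space.

    `unproject` maps a pixel position plus the pose stored for that frame
    back to a real-space position, so the instructional image is displayed
    at the position corresponding to the selected position in the still image.
    """
    cmd = to_wearable.get()
    pose = pose_history[cmd.frame_id]
    real_space_position = unproject(cmd.u, cmd.v, pose)
    return real_space_position, cmd.content
```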
Regarding Claim 21, Kasahara teaches A notification method for use in a wearable terminal device configured to be used by being worn by a user (
[Image: media_image1.png, 616 x 954, grayscale]
“The client 200 is a wearable terminal (hereinafter also simply referred to as a wearable terminal 200). The wearable terminal 200 includes, for example, either or both of an imaging unit and a display unit, and functions as either or both of the above (1) and (3). In the illustrated example, the wearable terminal 200 is a glasses type, . . ..” Kasahara p. 3.), the notification method comprising:
capturing a real-time moving image of a real space viewed by the user through a transparent display of the wearable terminal device (
“When functioning as the device of (1) above, the wearable terminal 200 includes, for example, a camera installed in a frame portion of glasses as an imaging unit. With this camera, wearable terminal 200 can acquire an image in real space from a position close to the user's viewpoint.” Kasahara p. 3. “The imaging unit 960 is a camera module that captures an image. The imaging unit 960 images a real space using an imaging element such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor), and generates a captured image. A series of captured images generated by the imaging unit 960 constitutes a video.” Kasahara p. 5.
“Alternatively, when the display is a transmissive type, the wearable terminal 200 may transparently superimpose and display the annotation on the real-world image that the user is directly viewing.” Kasahara p. 3.);
detecting a visible area of the real space viewed by the user through the transparent display (
[Image: media_image2.png, 570 x 796, grayscale]
“FIG. 31 and FIG. 32 are diagrams illustrating an application example of displaying an annotation outside the visible range according to an embodiment of the present disclosure. In the illustrated example, the display of the annotation changes while the image 1200 viewed by the user of the wearable terminal 200 changes from the image 1200a to the image 1200b and further to the image 1200c. In the image 1200, a pointer 1210, a direction display 1230, and a comment 1220 are displayed as annotations.” Kasahara p. 22.
The claimed “visible area” corresponds to fig. 31, image 1200c, “the image 1200 viewed by the user of the wearable terminal 200.”);
determining that the position in the real space is outside the visible area (
Here, 1220 is initially outside of the visible area, as in 1200a and 1200b. The “notification image,” mapped to the arrow 1230a and/or 1230b, makes “the user aware of existence” of the instructional image (1220 + annotation target).
“Note that the pointer 1210 is continuously displayed near the center of the image 1200, for example, as an icon indicating a user's gaze area, unlike some examples described above. The user of the wearable terminal 200 is guided by the direction display 1230 so that the annotation target (pan (PAN) in the illustrated example) input by the user of the tablet terminal 300 enters the pointer 1210, for example.” Kasahara p. 22.
Kasahara discloses that an annotation like “Check” is tied to a real-space location associated with objects like the pan in fig. 31, stating “In the wearable terminal 200, annotations input on the tablet terminal 300 are displayed on the image 1200 as a pointer 1210 and a comment 1220. The position where these annotations are displayed in the image 1200 corresponds to the position of the real space in the image 1300 displayed on the tablet terminal 300.” Kasahara p. 22.); and
displaying a notification image on a periphery of the transparent display (1230a or 1230b) indicating existence of the instructional image (1220 + annotation target) outside of the visible area (1200a or 1200b) (
[Image: media_image2.png, 570 x 796, grayscale]
“FIG. 31 and FIG. 32 are diagrams illustrating an application example of displaying an annotation outside the visible range according to an embodiment of the present disclosure. In the illustrated example, the display of the annotation changes while the image 1200 viewed by the user of the wearable terminal 200 changes from the image 1200a to the image 1200b and further to the image 1200c. In the image 1200, a pointer 1210, a direction display 1230, and a comment 1220 are displayed as annotations.” Kasahara p. 22.
The claimed “instructional image” is mapped to an annotated image that includes fig. 31 1220 “CHECK,” an annotation, along with the annotation target, a pan.
The claimed “visible area” corresponds to fig. 31, image 1200c, “the image 1200 viewed by the user of the wearable terminal 200.”
The “notification image,” mapped to the arrow 1230a and/or 1230b, makes “the user aware of existence” of (1220 + annotation target). The “notification image” is placed at a periphery of the transparent display corresponding to a direction in which the instructional image is located in the real space, as shown in fig. 31.).
Kasahara’s notification image (1230a or 1230b) does not explicitly teach a position of the periphery at which the notification image is disposed indicating a direction in which the instructional image is located in the real space.
Kasahara teaches that its notification image could instead take the form shown in Kasahara Fig. 28 (1260a, 1260b, 1260c), which shows a position of the periphery at which the notification image is disposed indicating a direction in which the instructional image is located in the real space (
[Image: media_image8.png, 727 x 683, grayscale]
“FIG. 28 is a diagram illustrating a fourth example of displaying an annotation outside the viewable range according to an embodiment of the present disclosure. In the illustrated example, when an apple to be annotated (APPLE) is outside the image 1200, the end portion 1260 of the image 1200 closer to the apple shines. For example, in the image 1200a, since the apple is in the lower right direction of the screen, the lower right end portion 1260a shines. In the image 1200b, since the apple is in the upper left direction of the screen, the upper left end portion 1260b shines. In the image 1200c, since the apple is in the lower left direction of the screen, the lower left end portion 1260c shines.” Kasahara p. 21.
“In the above example, the region of the end portion 1260 can be set based on the direction in which the annotation target exists as viewed from the image 1200. Although the example in the oblique direction is shown in the figure, in another example, when the apple is in the left direction of the image 1200, the left end portion 1260 may shine. In this case, the end portion 1260 may be the entire left side of the image 1200.” Kasahara p. 21.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Kasahara’s periphery based notification image with Kasahara’s teaching based on Figs. 31-32. One of ordinary skill in the art would be motivated to reduce obstruction of a user’s view. “Thus, when notifying that an annotation exists outside the viewable range due to a change in the display of the end portion 1260, it is not necessary to display a separate direction using, for example, an arrow, which obstructs the display of the image 1200.” Kasahara p. 21.
After the combination of these embodiments, the resulting display could appear as follows:
[Image: media_image4.png, 322 x 304, grayscale]
).
Regarding “instructional image,” the Examiner is reading the limitation to mean: an image that provides instructions or explanations. The Examiner has mapped the virtual icon and message requesting that a user “Check” the pan on the stove as part of the claimed “instructional image.” Although such a reminder/direction is one type of instruction, Applicant’s disclosure of “instructional image” appears to describe images that teach or explain something to a user. Kasahara does not explicitly and clearly disclose an “instructional image” that teaches or explains something to a user.
Further, Kasahara does not explicitly disclose
transmit the real-time moving image to an external device configured to display the real-time moving image,
receive, from the external device based on a selected position within a still image of the real-time moving image, a command to display an instructional image on the transparent display at a position in the real space corresponding to the selected position in the still image,
Tucker teaches an “instructional image” that teaches or provides an explanation about content (
[Image: media_image5.png, 510 x 786, grayscale]
“In other examples, overlaid, superimposed installation, trouble shooting, frequently asked questions (FAQ) information 1302 is overlaid, superimposed, etc. on the real world content 304 (see FIG. 13). In still further examples, a connection diagram 1402 (e.g., showing which cables are to be connected where) are overlaid, superimposed, etc. on the real world content 304 to assist, guide the user 108 in properly connecting the components C1, C2, G, etc. (see FIG. 14).” Tucker ¶ 50.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Tucker’s instructional images, which teach or explain, with Kasahara. One of ordinary skill in the art would be motivated to teach or explain to a user more effectively by using augmented or mixed reality. Further, the combination could also make remote teaching/instruction more effective.
Kasahara in view of Tucker does not explicitly disclose
transmit the real-time moving image to an external device configured to display the real-time moving image,
receive, from the external device based on a selected position within a still image of the real-time moving image, a command to display an instructional image on the transparent display at a position in the real space corresponding to the selected position in the still image.
Jiang teaches
transmit the real-time moving image to an external device configured to display the real-time moving image (
[Image: media_image6.png, 390 x 748, grayscale]
“At the work site, the camera 21c of the operator 2 captures a camera image 2c presenting an environment of a work site, and the camera image 2c is transmitted from the operator terminal 20t to the instructor terminal 10t. The camera image 2c is displayed at the instructor terminal 10t.” Jiang ¶ 48.
“The operation supporting method in the first embodiment will be briefly described. In the first embodiment, integrated posture information 2e generated based on the camera image 2c, and the camera image 2c captured by the camera 21c are distributed from the operator terminal 201 to the remote support apparatus 101 at real time. From the remote support apparatus 101, instruction information 2f, which the instructor 1 inputs by pointing on the panorama image 4, is distributed to the operator terminal 201. Also, audio information 2v between the operator 2 and the instructor 1 is also interactively distributed at real time.” Jiang ¶ 74.
“The camera image 2c is captured by the camera 21c, and a stream of the multiple camera images 2c successively captured in time sequence is distributed as a video.” Jiang ¶ 76.
“With this camera, wearable terminal 200 can acquire an image in real space from a position close to the user's viewpoint. The acquired image is transmitted to the server 100.” Kasahara p. 3.),
receive, from the external device based on a selected position within a still image of the real-time moving image, a command to display an instructional image on the transparent display at a position in the real space corresponding to the selected position in the still image (
Fig. 1:
[Image: media_image7.png, 196 x 282, grayscale]
shows the selected position that “HERE” points to within a still image (camera image 2c) for the instruction detail 1e added by the instructor. The HMD 21d is the transparent display.
Jiang teaches an instructor’s command to display the instructional image, mapped to “Instruction detail 1e,” stating “When the instructor 1 inputs an instruction detail 1e on the camera image 2c displayed at the instructor terminal 10t, instruction data 1d is sent to the operator terminal 20t. When the operator terminal 20t receives the instruction data 1d, an image generated by integrating the camera image 2c and the instruction detail 1e is displayed at the display device 21d.” Jiang ¶ 49.
Jiang further explains, “When the circumstance of the work site, that is, the environment of the working place is shared between the operator 2 and the instructor 1, the instructor 1 indicates an operation target at the work site to solve the problem with respect to the camera image 2c displayed at the instructor terminal 10t (PHASE_2). In the PHASE_2, it is preferable to accurately point out the operation target in a location relationship with the operator 2.” Jiang ¶ 53.
Jiang further explains, “FIG. 3 is a diagram for explaining an operation supporting method in a first embodiment. In a system 1001 illustrated in the first embodiment depicted in FIG. 3, a marker 7a is placed at a location to be the reference point at a working place 7. The marker 7a is used as a reference object representing the reference point, and includes information to specify a location and a posture of the operator 2 from the camera image 2c captured by the camera 21c. An AR marker or the like may be used, but is not limited to the AR marker.” Jiang ¶ 70.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Jiang’s remote-instruction teachings with Kasahara in view of Tucker. One of ordinary skill in the art would be motivated to provide remote instruction to an operator on site, which could improve work efficiency and safety.
Regarding Claim 2, Kasahara in view of Tucker and Jiang teaches The wearable terminal device according to claim 20,
wherein
the at least one processor is configured to display the instructional image on a display surface of the transparent display with the instructional image visible in the real space that is visible through the transparent display (“Alternatively, when the display is a transmissive type, the wearable terminal 200 may transparently superimpose and display the annotation on the real-world image that the user is directly viewing.” Kasahara p. 3.).
Regarding Claim 3, Kasahara in view of Tucker and Jiang teaches The wearable terminal device according to claim 20, further comprising:
wherein the at least one processor causes the transparent display to display an image of the real space captured by the camera and the instructional image superimposed on the image of the real space (
“When functioning as the device of (1) above, the wearable terminal 200 includes, for example, a camera installed in a frame portion of glasses as an imaging unit. With this camera, wearable terminal 200 can acquire an image in real space from a position close to the user's viewpoint. The acquired image is transmitted to the server 100. Moreover, when functioning as said (3) apparatus, the wearable terminal 200 has the display installed in the one part or all part of the lens part of spectacles, for example as a display means. The wearable terminal 200 displays an image captured by the camera on the display and superimposes the annotation input by the device (2) on the image.” Kasahara p. 3. “Alternatively, when the display is a transmissive type, the wearable terminal 200 may transparently superimpose and display the annotation on the real-world image that the user is directly viewing.” Kasahara p. 3.
The transparent display has been addressed in the independent claim.).
Regarding Claim 4, Kasahara in view of Tucker and Jiang teaches The wearable terminal device according to claim 20, further comprising:
a communication unit configured to communicate data with an external device used by a remote instructor (
With respect to “communication unit,” Kasahara states, “The image data and the spatial information are associated with each other and transmitted from the communication unit of wearable terminal 200 to server 100 (step S103).” Kasahara p. 7.
“In the tablet terminal 300, the communication unit receives the image data from the server 100, and the processor displays the image 1300 on the display 330 based on the received image data (step S109). Here, when the user's annotation input for the image 1300 is acquired by the touch sensor 340 (step S111), the processor relates the annotation input to a position in the image 1300 (for example, the position of the pointer 1310), and communicates from the communication unit to the server 100. (Step S113).” Kasahara pp. 7-8.
The claimed “external device” is mapped to the disclosed “tablet terminal 300.”),
wherein the at least one processor
is configured to generate the instructional image based on instructional data received by the communication unit from the external device, and
control the transparent display to display the instructional image (
“In the wearable terminal 200, the communication unit receives the annotation input and the real space position information from the server 100, and the processor displays the real space position associated with the annotation information on the current display 230 using the spatial information. The image is converted into a position in the image 1200 (step S119), and an annotation (for example, a pointer 1210 or a comment 1220) is displayed at the position (step S121).” Kasahara p. 8.
The claimed “instructional data” is mapped to the disclosed “the annotation input and the real space position information.”
“Alternatively, when the display is a transmissive type, the wearable terminal 200 may transparently superimpose and display the annotation on the real-world image that the user is directly viewing.” Kasahara p. 3.).
Regarding Claim 5, Kasahara in view of Tucker and Jiang teaches The wearable terminal device according to claim 4,
wherein the communication unit is configured to perform conversation communication with the external device (
[BRI on the record] With respect to “speech data,” the Examiner is reading the limitation to require voice data. In addition, transcribed speech is text, not speech data. This interpretation is in light of the specification:
[0057] . . . The communication unit 16 also communicates speech data to and from the external devices 20. In other words, the communication unit 16 transmits speech data collected by the microphone 17 to the external devices 20 and receives speech data transmitted from the external devices 20 in order to output speech from the speaker 18.
Spec. ¶ 57.
[Mapping Analysis]
“In other words, in the illustrated example, an interactive conversation relating to, for example, a work can be performed between the user who provides the image and the user serving as a teacher via the comment 1320. Also in this case, the comment 1320 is associated with the position in the real space, so that the comment can be accurately displayed at the position of the target component or the like. This image may be shared with another user.” Kasahara p. 25.), and
the at least one processor is configured to control the transparent display to display the instructional image during the conversation communication via the communication unit (Id.).
Kasahara does not explicitly disclose that the conversation is carried out by using speech data.
However, Tucker teaches that the conversation is carried out by using speech data (
“The HMD 1500 also includes a speaker 1504, which may be used to present audible instructions, which may include live instructions from a remote assistant or instructor, prerecorded instructions, or computer-generated speech instructions, along with other sounds. The HMD 1500 likewise includes an input 1508 to receive user input from the user, which may include user interactions with the AR environment, in some embodiments. Each of the display 1502, speaker 1504, or input 1508 may be integrated into the HMD 1500 or may be communicatively connected thereto.” Tucker ¶ 53.
The transparent display has been addressed in the independent claim.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Tucker’s voice communication with Kasahara. One of ordinary skill in the art would be motivated to communicate more effectively with a user of an HMD by adding one more means of communication: voice.
Regarding Claim 6, Kasahara in view of Tucker and Jiang teaches The wearable terminal device according to claim 4,
wherein during a condition in which the instructional image cannot be displayed based on the instructional data, the at least one processor is configured to control the transparent display to display a second notification making the user aware that the instructional image is not displayed on the transparent display (
Kasahara fig. 31:
[Image: media_image2.png, 570 x 796, grayscale]
fig. 28:
[Image: media_image4.png, 322 x 304, grayscale]
The claimed “first notification” is mapped to 1230a in the figure.
The claimed “second notification” is mapped to 1230b in the figure.
“Alternatively, when the display is a transmissive type, the wearable terminal 200 may transparently superimpose and display the annotation on the real-world image that the user is directly viewing.” Kasahara p. 3.
The transparent display has been addressed in the independent claim).
Regarding Claim 7, Kasahara in view of Tucker and Jiang teaches The wearable terminal device according to claim 20,
wherein the at least one processor is configured to control the transparent display to display the notification image being visually recognizable by the user (Kasahara fig. 31 1230a and 1230b; fig. 28 1260a-1260c.
The transparent display has been addressed in the independent claim).
Regarding Claim 8, Kasahara in view of Tucker and Jiang teaches The wearable terminal device according to claim 7, wherein the notification image is a prescribed notification display performed by the transparent display (Kasahara fig. 31 1230a and 1230b; fig. 28 1260a-1260c. “The wearable terminal 200 displays an image captured by the camera on the display and superimposes the annotation input by the device (2) on the image.” Kasahara p. 3.
The transparent display has been addressed in the independent claim).
Regarding Claim 12, Kasahara in view of Tucker and Jiang teaches The wearable terminal device according to claim 20.
Kasahara in view of Tucker teaches wherein the instructional image is a document image of a prescribed format (
[BRI on the record] With respect to “document image,” the Examiner is reading the limitation to mean: an image representation of a computer document file. This interpretation is in light of the specification:
[0080] The instructional images 31 may be document images in a prescribed file format. Document images serving as instructional images are displayed as window screens, for example, as illustrated by the virtual images 30 in FIG. 3. The document images may be instructions or a manual illustrating steps for the work to be performed. The file format of the document images may be a file format relating to image data, such as JPEG, PDF, or a file format of any other file generated by software.
Spec. ¶ 80.
[Mapping Analysis]
“The HMD passes the identifier to the gateway G (see FIG. 1), such as a set-top box via the connected component C1, C2, G, etc. (block 214). When the requested data is received (e.g., as JSON data parsed in HTML file(s)), and transferred (e.g., using a Generic Attribute Profile (GATT) of a low-energy Bluetooth transfer, and/or a bi-directional transport using web sockets) and parsed (block 216), the user 108 can view the requested data on the HMD 102 in, for example, an AR based presentation (block 218).” Tucker ¶ 38.
The claimed “document image” is mapped to the image representation of a “HTML file.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Tucker’s file-based instructional images with Kasahara. One of ordinary skill in the art would be motivated to communicate more effectively with a user of an HMD by adding one more type of information to share. Information in a file may be pre-prepared, formatted, and/or more detailed.
Regarding Claim 13, Kasahara in view of Tucker and Jiang teaches The wearable terminal device according to claim 20, wherein the instructional image is an image of a virtual object (
[Image: media_image9.png, 283 x 339, grayscale]
Kasahara fig. 32, 1220, if it is mapped to the “instructional image.”).
Regarding Claim 14, Kasahara in view of Tucker and Jiang teaches The wearable terminal device according to claim 13,
wherein the virtual object is an object representing a path traced by pen input (
[BRI on the record] With respect to “pen input,” the Examiner is reading the limitation to mean: input by an object’s tip, e.g., a pen or finger. This interpretation is in light of the specification: “The path traced by pen input may be identified from detection results of a path traced by the user's fingertip, or based on the path of movement of the tip of a prescribed pen input device held by the user or remote instructor.” Spec. ¶ 79.
“The user of the tablet terminal 300 inputs the annotation 1310 for the image 1300 using the touch sensor 340 (operation unit) provided on the display 330. In the illustrated example, the annotation 1310 is a graffiti drawn on the screen (SCREEN '). The annotation 1310 is associated with a position on the screen (SCREEN) in the real space based on, for example, spatial information that the tablet terminal 300 acquires together with the captured image.” Kasahara p. 15.).
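For illustration only, a minimal sketch (Python) of a virtual object representing a path traced by pen input, consistent with the interpretation above; the sample format and field names are assumptions for exposition, not structures disclosed by Kasahara:
```python
def trace_pen_path(samples):
    """Collect the tip positions sampled while the pen or fingertip is
    down into an ordered point list representing the traced path (the
    (x, y, pen_down) sample format is an assumption for illustration)."""
    points = [(x, y) for (x, y, pen_down) in samples if pen_down]
    return {"type": "pen_path", "points": points}
```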
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Kasahara in view of Tucker and Jiang as applied to Claim 20, in further view of Park et al. (US 20170115728 A1).
Regarding Claim 10, Kasahara in view of Tucker and Jiang teaches The wearable terminal device according to claim 20.
Kasahara in view of Tucker and Jiang does not teach wherein the notification image includes an output of prescribed sound.
Park teaches wherein the notification image includes an output of prescribed sound (
“Also, in order to guide a position where the event information is formed in the visual space, the controller 180 may output a notification sound to any one of a left audio output unit and a right audio output unit provided in the HMD 200 (or any one of a left audio output unit and a right audio output unit formed in a headset or a speaker).” Park ¶ 280.
“For example, in a case in which the event information is formed on the right with respect to a currently viewed region, the controller 180 may output the notification sound only to the right audio output unit. In a case in which the event information is formed on the left with respect to the currently viewed region, the controller 180 may output the notification sound only to the left audio output unit.” Park ¶ 281.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Park’s audio notification with Kasahara in view of Tucker and Jiang. One of ordinary skill in the art would be motivated to discreetly notify a user without blocking or interfering with the user’s view with visual notifications. If the sound notification is given in addition to the visual notification, it would enhance the effectiveness of the notification. Some users may also be more alert to audio notifications.
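For illustration only, a minimal sketch (Python) of Park’s left/right notification sound selection (¶¶ 280-281); the pixel-based comparison is an assumption for exposition:
```python
def notification_sound_channel(annotation_u, display_center_u):
    """Choose the audio output per Park's rule: the right speaker when the
    event position is to the right of the currently viewed region, the left
    speaker otherwise (this simple horizontal test is an assumption)."""
    return "right" if annotation_u > display_center_u else "left"
```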
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Kasahara in view of Tucker and Jiang as applied to Claim 20, in further view of Gan et al. (US 20190189159 A1).
Regarding Claim 11, Kasahara in view of Tucker and Jiang teaches The wearable terminal device according to claim 20.
Kasahara suggests wherein the at least one processor is configured to identify a work object to be worked on by the user, and determine a display position of the instructional image within a range excluding a range where the instructional image would visually obstruct the work object (
Kasahara fig. 32:
[Image: media_image9.png, 283 x 339, grayscale]
).
Kasahara in view of Tucker and Jiang does not explicitly disclose wherein the at least one processor is configured to identify a work object to be worked on by the user, and determine a display position of the instructional image within a range excluding a range where the instructional image would visually obstruct the work object.
Gan teaches wherein the at least one processor is configured to identify a work object to be worked on by the user, and determine a display position of the instructional image within a range excluding a range where the instructional image would visually obstruct the work object (
“. . . access a video, identify the objects of interest associated with the incident type within the video, and place the annotation within the video such that the annotation does not block an object of interest associated with the incident type.” Gan ¶ 22. ).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Gan’s label-placement strategy with Kasahara in view of Tucker and Jiang. One of ordinary skill in the art would be motivated not to distract the user’s attention from the object of interest.
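For illustration only, a minimal sketch (Python) in the spirit of Gan ¶ 22: choose a display position for the instructional image from candidate slots, excluding any slot whose rectangle would overlap the identified work object. The rectangle representation and candidate-slot scheme are assumptions for exposition:
```python
def overlaps(a, b):
    """Axis-aligned rectangle overlap test; rectangles are (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def place_instructional_image(candidates, image_size, work_object_rect):
    """Return the first candidate top-left position whose rectangle does
    not visually obstruct the identified work object; None if every
    candidate would overlap it."""
    w, h = image_size
    for x, y in candidates:
        if not overlaps((x, y, w, h), work_object_rect):
            return (x, y)
    return None
```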
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Kasahara in view of Tucker and Jiang as applied to Claim 20, in further view of WALDMAN et al. (US 20140189515 A1) and Yamamoto et al. (US 20070150273 A1).
Regarding Claim 18, Kasahara in view of Tucker and Jiang teaches The wearable terminal device according to claim 20.
Kasahara in view of Tucker and Jiang does not explicitly disclose wherein the at least one processor is configured to control the transparent display to display the notification image in a first manner in response to determining that the instructional image has not been displayed on the transparent display before and in a second manner in response to determining that the instructional image has been displayed on the transparent display before.
WALDMAN teaches
wherein the at least one processor is configured to control the transparent display to display the notification image in a first manner in response to determining that the instructional image has not been viewed on the transparent display before and in a second manner in response to determining that the instructional image has been viewed on the transparent display before (
“For example, a notification marker/icon may be a certain color when new notification events are present, and a different color when all notification events have been viewed.” WALDMAN ¶ 74.
After Kasahara in view of Tucker and Jiang is combined with WALDMAN, if the instruction images/video/AR content as shown in Tucker has been viewed before, the notification/arrow would be shown in a different color, indicating the instruction information has already been reviewed.
The first manner and second manner are based on the color of the notification image. ).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine WALDMAN’s coloring strategy with Kasahara in view of Tucker and Jiang. One of ordinary skill in the art would be motivated to provide information to a user, so that an informed decision may be made. If the instructional information has already been viewed, the user may decide not to view it again.
Kasahara in view of Tucker, Jiang, and WALDMAN does not explicitly disclose that displayed content is assumed to be viewed content.
Yamamoto teaches that displayed content is assumed to be viewed content (“. . . a given program is assumed to be ‘viewed’ when it is displayed on the TV screen.” Yamamoto ¶ 37. ).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Yamamoto with Kasahara in view of Tucker, Jiang, and WALDMAN. One of ordinary skill in the art would be motivated to conveniently estimate the content that has been viewed.
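For illustration only, a minimal sketch (Python) of the combined claim-18 behavior: one color before the instructional image has been displayed (and thus, per Yamamoto ¶ 37, treated as not yet viewed) and a different color afterward (WALDMAN ¶ 74). The specific color values are assumptions for exposition:
```python
def notification_color(ever_displayed: bool) -> str:
    """First manner vs. second manner per the WALDMAN color rule: one
    color before the instructional image has been displayed/viewed, a
    different color afterward (the particular colors are assumed)."""
    return "gray" if ever_displayed else "orange"
```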
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Kasahara in view of Tucker and Jiang as applied to Claim 20, in further view of WALDMAN and OSHIBA et al. (US 20170005969 A1).
Regarding Claim 19, Kasahara in view of Tucker and Jiang teaches The wearable terminal device according to claim 20.
Kasahara in view of Tucker and Jiang does not explicitly disclose
wherein the at least one processor is configured to control the transparent display to display the notification image in a first manner in response to determining that a time elapsed since the command was received is less than or equal to a prescribed reference time and in a second manner in response to determining that the time elapsed since the command was received is greater than the prescribed reference time.
WALDMAN teaches wherein the at least one processor is configured to control the transparent display to display the notification image in a first manner in response to determining that a first condition is satisfied and in a second manner in response to determining that a second condition is satisfied (
“For example, a notification marker/icon may be a certain color when new notification events are present, and a different color when all notification events have been viewed.” WALDMAN ¶ 74.
After Kasahara in view of Tucker and Jiang is combined with WALDMAN, if the instruction images/video/AR content as shown in Tucker has been viewed before, the notification/arrow would be shown in a different color, indicating the instruction information has already been reviewed.
The first manner and second manner are based on the color of the notification image. ).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine WALDMAN’s coloring strategy with Kasahara in view of Tucker and Jiang. One of ordinary skill in the art would be motivated to provide information to a user, so that an informed decision may be made. If the instructional information has already been viewed, the user may decide not to view it again.
Kasahara in view of Tucker, Jiang, and WALDMAN does not explicitly disclose
that the first condition is that a time elapsed since the command was received is less than or equal to a prescribed reference time, and that the second condition is that the time elapsed since the command was received is greater than the prescribed reference time.
Oshiba teaches that the first condition is that a time elapsed since the command was received is less than or equal to a prescribed reference time, and that the second condition is that the time elapsed since the command was received is greater than the prescribed reference time (
“. . . a message display control device according to one embodiment of the present invention relates to a message display control device comprising at least one processor configured to: display at least one of a plurality of message objects on a display; acquire a determination result about whether or not a reference display time period has elapsed for each of the at least one of the plurality of message objects; and display a new message object on the display, on which the at least one of the plurality of message objects are displayed, based on the determination result.” Oshiba ¶ 8.
Oshiba teaches, “Further, in contrast to the above-mentioned configuration, the display control unit 72 may be configured to inhibit the new message object 34C from being displayed translucently at a time point of the start of the display, and to gradually raise the transparency of the new message object 34C with the elapse of time. That is, the display control unit 72 inhibits the new message object 34C from becoming translucent immediately after the start of the display, and changes the transparency of the new message object 34C so that the message object 34C becomes more translucent with the elapse of time. With this configuration, the new message object 34C is in a relatively visible state immediately after the start of the display, and causes the new message object 34C to become more translucent with the elapse of time, to thereby be able to allow the message object 34A to be confirmed as well.” Oshiba ¶ 119.
The claimed “prescribed reference time” is mapped to the disclosed “reference display time period.”
Here, when a message has been displayed for less than the “reference display time period,” the message is considered new.
After Kasahara in view of Tucker, Jiang, and WALDMAN is combined with Oshiba, the new instructional image may change color and disappear after a reference duration of time.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Oshiba’s timed display treatment of information with Kasahara in view of Tucker, Jiang, and WALDMAN. One of ordinary skill in the art would be motivated to deemphasize and/or remove older information, since newer information is often more current and relevant to the situation.
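For illustration only, a minimal sketch (Python) of the combined claim-19 behavior: the notification image is rendered in a first manner while the time elapsed since the command was received is within a prescribed reference time, and in a second manner afterward (cf. Oshiba ¶¶ 8, 119). The reference value and manner labels are assumptions for exposition:
```python
import time

REFERENCE_TIME_S = 10.0  # assumed value for the prescribed reference time

def notification_manner(command_received_at, now=None):
    """Return the display manner for the notification image: the first
    manner while the elapsed time since the command was received is within
    the reference time, the second manner afterward (e.g., dimmed)."""
    elapsed = (now if now is not None else time.time()) - command_received_at
    return "first manner" if elapsed <= REFERENCE_TIME_S else "second manner"
```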
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
FUKAZAWA et al. (US 20190066630 A1) states, “. . . because the information is required to be reliably delivered to the user, the information converting unit 122 converts the notification information into an explicit display style such as push notification (dialog, annotation) to inside the field of view of the user.” ¶ 44. This is similar to the instant application’s pushing of notifications to a user who is viewing mixed reality.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZHENGXI LIU whose telephone number is (571)270-7509. The examiner can normally be reached M-F 9 AM - 5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung can be reached at 571-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ZHENGXI LIU/Primary Examiner, Art Unit 2611