DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 18, 2026 has been entered.
Response to Amendment
The amendment filed on December 29, 2025 has been entered.
In view of the amendment to the claims, the amendments to claims 1 and 19-23 are acknowledged.
Response to Arguments
Applicant’s arguments, see pages 9-22 of the Remarks filed December 29, 2025, have been fully considered and are persuasive. Applicant’s arguments are directed to the amended limitations recited in independent claim 1 and are addressed in the claim rejections below.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 21-22 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claim 21 recites “an arrangement of subject in the virtual space is different than the arrangement of the subject in the virtual space in the field of view of the user at a time of the capturing of the image or moving image”; claim 22 recites “a position of the subject in the virtual space is different than the position of the subject in the virtual space in the field of view of the user at a time of the capturing of the image or moving image”.
In view of the specification of the present application, the specification describes “an arrangement of subject in the virtual space” and “a position of subject in the virtual space”. As shown in FIG. 2 and paragraphs [0031]-[0035] of the specification, an arrangement/a position of subject 410 in the virtual space is described. FIG. 2 shows the subject in the virtual space in the field of view of the user at a time of the capturing of the image or moving image. FIG. 3 shows a captured image 510 including the subject 410 in the virtual space. Thus, FIGS. 2-3 show that an arrangement or a position of the captured image of the subject in the virtual space (as shown in FIG. 3) is different than the position of the subject in the virtual space in the field of view of the user at a time of the capturing of the image or moving image (as shown in FIG. 2).
However, the specification does not describe “an arrangement of subject in the virtual space is different than the arrangement of the subject in the virtual space in the field of view of the user at a time of the capturing of the image or moving image” or “a position of the subject in the virtual space is different than the position of the subject in the virtual space in the field of view of the user at a time of the capturing of the image or moving image” as recited in claims 21 and 22. Accordingly, the specification does not enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make the invention commensurate in scope with the limitations recited in the claims. Therefore, the examiner has made the prior art rejections below based on the examiner's best understanding of the claims.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 21-22 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention.
Claim 21 recites “an arrangement of subject in the virtual space is different than the arrangement of the subject in the virtual space in the field of view of the user at a time of the capturing of the image or moving image”; claim 22 recites “a position of the subject in the virtual space is different than the position of the subject in the virtual space in the field of view of the user at a time of the capturing of the image or moving image”.
As discussed above, the specification does not describe “an arrangement of subject in the virtual space is different than the arrangement of the subject in the virtual space in the field of view of the user at a time of the capturing of the image or moving image” or “a position of the subject in the virtual space is different than the position of the subject in the virtual space in the field of view of the user at a time of the capturing of the image or moving image” as recited in claims 21 and 22. The issue is that a person of ordinary skill in the art reading the specification would not be able to understand what Applicant regards as the invention. Therefore, the claims are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6-8, 14 and 18-23 are rejected under 35 U.S.C. 103 as being unpatentable over KAJIWARA (U.S. Patent Application Publication 2019/0306419 A1) in view of SAWAKI et al. (U.S. Patent Application Publication 2019/0026950 A1).
Regarding claim 1, KAJIWARA discloses an information processing apparatus comprising:
circuitry (FIG. 7; paragraph [0099], a part or all of the image acquisition unit 110, the image processing unit 111, the additional information generation unit 112, the UI unit 113, the communication unit 114, and the output unit 115 can be configured as one or more hardware circuits that operate in cooperation with each other) configured to:
control to capture an image or a moving image of a subject (FIGS. 16A and 16B; paragraph [0175], FIG. 21 shows the input screen 600 switched to the region designation screen, which includes, for example, a cut image display field 6020, a check box 6021, and a selection button 6022 ...; paragraph [0177], FIG. 22 is an example of the screen 500 being displayed when designating a cut region in accordance with an operation on the selection button 6022 according to the first embodiment. As illustrated in FIG. 22, in response to the operation on the selection button 6022, the UI unit 113 displays a frame 603 indicating the cut region on the screen 500, and deletes the display of the annotation input screen 600 from the screen 500. For example, the frame 603 is displayed at the region, which is pre-set for the position designated in step S103; paragraph [0181], In step S133, the UI unit 113 acquires an image of the region designated by the frame 603 from the partial image 5010 as a cut image) in a field of view of a user in a virtual space (Paragraph [0149], FIG. 18 is an example of the screen 500 displayed using the display device 1010 under the control of the UI unit 113 according to the first embodiment. In this example case, the UI unit 113 displays the partial image 5010 of the partial region 511 illustrated in FIG. 17 on the entire area of the screen 500); and
control to display, at a predetermined area in a display (Paragraph [0175], the annotation input screen 600 on the screen 500), the captured image or the captured moving image in the virtual space (Paragraph [0182], in step S134, the UI unit 113 displays the cut image acquired in step S132 on the cut image display field 6020 in the annotation input screen 600), wherein the captured image or the captured moving image is based on less than an entirety of the field of view of the user viewing the virtual space (Paragraph [0149], FIG. 18 is an example of the screen 500 displayed using the display device 1010 under the control of the UI unit 113 according to the first embodiment. In this example case, the UI unit 113 displays the partial image 5010 of the partial region 511 illustrated in FIG. 17 on the entire area of the screen 500; paragraph [0157], in FIG. 20, the annotation input screen 600 includes tabs 601a, 601b, and 601c for switching functions of the annotation input screen 600 ... The tab 601c is a tab for selecting a cut region, and in accordance with an operation of the tab 601c, the display of the screen 500 is switched to a display for designating a region on the partial image 5010 ...; paragraphs [0173]-[0174], if the tab 601c is designated in step S104, the UI unit 113 determines to perform a selection of cut region for the partial image 5010 in step S130, and proceeds the sequence to step S131 ... In step S131, the UI unit 113 switches the display of the annotation input screen 600 to a display of the region designation screen used for designating the cut region as a diagnosis region including a diagnosis target image. FIG. 21 is an example of the annotation input screen 600 switched to the region designation screen according to the first embodiment; paragraph [0175], In FIG. 21, the annotation input screen 600 switched to the region designation screen includes, for example, a cut image display field 6020 ... The cut image display field 6020 displays an image cut from a region designated for the partial image 5010. Thus, the user views the virtual space as shown in FIG. 20, selects a subject using a marker 602, and designates a cut region of a structure within the view of the virtual space; an image cut from the region designated in the virtual space is then displayed) and the predetermined area is less than an entirety of an area of the display (Paragraph [0175], FIG. 21 shows the area of the annotation input screen 600 is less than an entirety of an area of the display 500).
However, KAJIWARA does not specifically disclose wherein the circuitry is further configured to maintain the captured image or the captured moving image being displayed at the predetermined area in the display in response to the field of view of the user changing.
In addition, SAWAKI discloses (Abstract, a method of providing a virtual space according to at least one embodiment of this disclosure includes defining a virtual space ... The method further includes determining a field of view of the first user based on a first virtual camera arranged in the virtual space. The method further includes displaying an image corresponding to the field of view of the first user on the first head-mounted device ...; FIGS. 2 and 12; paragraph [0164], ... In FIG. 12A, the virtual space 11A ... An avatar object 6A of the user 5A and an avatar object 6B of the user 5B are present in the virtual space 11A ...; paragraphs [0194]-[0195], with reference to FIG. 15, the computer 200 provides the virtual space 11 to the HMD (head-mounted device) 120 worn by the user 5 ... The computer 200 arranges the avatar object 6 corresponding to the user 5 in the virtual space 11 ... The computer 200 further displays on the monitor of the HMD 120 an image corresponding to the field-of-view region of the avatar object 6 ...) wherein the circuitry is further configured to maintain the captured image or the captured moving image being displayed at the predetermined area in the display (Paragraphs [0196]-[0197], the avatar object 6 moves in accordance with the operation of the user 5. The user 5 operates the camera object 1541 with the avatar object 6 to photograph the virtual space 11 ... In the example of FIG. 15, a photography range 1542 of the camera object 1541 includes a flower 1543, which is a portion of the panorama image 13 ...; FIG. 16; paragraphs [0200]-[0201], under the state of FIG. 16, the user 5 is visually recognizing a field-of-view image 1617 developed on the monitor of the HMD 120 ... The camera object 1541 has a preview screen. In the example of FIG. 16, the preview screen includes a flower 1543. The computer 200 executes photography in the virtual space 11 when a press of a button 1647 arranged on the camera object 1541 by the hand object 1644 is received. As a result, the computer 200 generates a photograph image ...; paragraph [0202], the computer 200 generates a photograph object 1545 representing the photograph image, and arranges the object near the camera object 1541. The computer 200 also moves the photograph object 1545 to the position of the table object 1546) in response to the field of view of the user changing (Paragraphs [0198]-[0199], the computer 200 arranges the photograph object 1545 on a table object 1546 ... The user 5 moves to the table object 1546 in the virtual space 11, and confirms the generated photograph image ...; paragraph [0268], FIG. 25 is a diagram of processing of notifying the user 5A that the photograph object 1545 is arranged at the accumulation place according to at least one embodiment of this disclosure. A field-of-view image 2517 visually recognized by the user 5A includes a table object 1546 and a plurality of photograph objects 1545; FIG. 15 shows the virtual space 11; the flower 1543, a portion of the panorama image 13, and a table object 1546 are arranged at different viewpoints; paragraph [0268], FIG. 25 shows the field-of-view image corresponding to the table object 1546).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of inputting a diagnosis result of a diagnosis target detectable for a structure taught by KAJIWARA to incorporate the teachings of SAWAKI, applying the method for displaying an image corresponding to the field of view of the user taught by SAWAKI to generate a virtual image based on a first viewpoint of the user and to display the generated image in the virtual space at a second viewpoint of the user different from the first viewpoint. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify KAJIWARA according to the relied-upon teachings of SAWAKI to obtain the invention as specified in the claim.
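For illustration only, the claim 1 behavior mapped above — keeping the captured image at a fixed, predetermined area of the display even as the user's field of view changes — can be sketched in a few lines of code. This is a minimal, hypothetical sketch, not an implementation drawn from KAJIWARA or SAWAKI; the names ScreenAnchoredImage and render_frame are assumptions introduced for the example.

```python
# Hypothetical sketch (not from KAJIWARA or SAWAKI): an image anchored to a
# fixed screen-space area is unaffected by changes in the user's field of
# view, unlike content anchored in the world/virtual space.
from dataclasses import dataclass

@dataclass
class ScreenAnchoredImage:
    pixels: bytes   # captured image data
    area: tuple     # (x, y, width, height) in display coordinates

def render_frame(view_yaw_deg: float, overlay: ScreenAnchoredImage) -> dict:
    """Compose one frame: the world layer depends on the view direction,
    but the overlay's display area is independent of it."""
    world_layer = f"virtual space rendered at yaw {view_yaw_deg:.1f} deg"
    return {"world": world_layer, "overlay_area": overlay.area}

overlay = ScreenAnchoredImage(pixels=b"...", area=(20, 20, 320, 180))
for yaw in (0.0, 45.0, 90.0):              # the field of view changes...
    frame = render_frame(yaw, overlay)
    assert frame["overlay_area"] == (20, 20, 320, 180)  # ...the area does not
```

The point of the sketch is the design choice at issue: content anchored in screen space is, by construction, unaffected by the view direction, whereas world-anchored content is not.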
Regarding claim 2, the combination of KAJIWARA in view of SAWAKI discloses everything claimed as applied above (see claim 1), and KAJIWARA further discloses wherein the circuitry is further configured to arrange the displayed image or the displayed moving image in the virtual space (FIGS. 16A and 16B; paragraph [0175], FIG. 21 shows the input screen 600 switched to the region designation screen, which includes, for example, a cut image display field 6020, a check box 6021, and a selection button 6022 ...; paragraph [0177], FIG. 22 is an example of the screen 500 being displayed when designating a cut region in accordance with an operation on the selection button 6022 according to the first embodiment. As illustrated in FIG. 22, in response to the operation on the selection button 6022, the UI unit 113 displays a frame 603 indicating the cut region on the screen 500, and deletes the display of the annotation input screen 600 from the screen 500. For example, the frame 603 is displayed at the region, which is pre-set for the position designated in step S103; paragraph [0181], In step S133, the UI unit 113 acquires an image of the region designated by the frame 603 from the partial image 5010 as a cut image ...; paragraph [0182], in step S134, the UI unit 113 displays the cut image acquired in step S132 on the cut image display field 6020 in the annotation input screen 600).
Regarding claim 3, the combination of KAJIWARA in view of SAWAKI discloses everything claimed as applied above (see claim 2), and KAJIWARA further discloses wherein the circuitry is further configured to arrange the displayed image or the captured moving image at a place in the virtual space set with respect to a position of the user viewing the virtual space (Paragraph [0175], in FIG. 21, the annotation input screen 600 switched to the region designation screen includes, for example, a cut image display field 6020, a check box 6021, and a selection button 6022. The cut image display field 6020 displays an image cut from a region designated for the partial image 5010).
Regarding claim 6, the combination of KAJIWARA in view of SAWAKI discloses everything claimed as applied above (see claim 1), and KAJIWARA further discloses wherein the circuitry is further configured to image a part of the field of view of the user viewing the virtual space (Paragraph [0177], FIG. 22 is an example of the screen 500 being displayed when designating a cut region in accordance with an operation on the selection button 6022 according to the first embodiment. As illustrated in FIG. 22, in response to the operation on the selection button 6022, the UI unit 113 displays a frame 603 indicating the cut region on the screen 500 ...).
Regarding claim 7, the combination of KAJIWARA in view of SAWAKI discloses everything claimed as applied above (see claim 6), and KAJIWARA further discloses wherein the field of view of the user viewing the virtual space includes a display range of a display unit in which the virtual space is displayed (Paragraph [0177], FIG. 22 is an example of the screen 500 being displayed when designating a cut region in accordance with an operation on the selection button 6022 according to the first embodiment ...), and
the circuitry is further configured to acquire a part of the display range of the display as the imaged subject (Paragraph [0177], as illustrated in FIG. 22, in response to the operation on the selection button 6022, the UI unit 113 displays a frame 603 indicating the cut region on the screen 500, and deletes the display of the annotation input screen 600 from the screen 500. For example, the frame 603 is displayed at the region, which is pre-set for the position designated in step S103 (As shown in FIG. 16)).
Regarding claim 8, the combination of KAJIWARA in view of SAWAKI discloses everything claimed as applied above (see claim 1), and KAJIWARA further discloses wherein when a trigger is detected, the circuitry is further configured to continuously acquire a plurality of the images or the moving images and determine whether or not each of the acquired images or the acquired moving images has been successfully captured (FIGS. 7 and 16; paragraph [0177], as illustrated in FIG. 22, in response to the operation on the selection button 6022, the UI unit 113 displays a frame 603 indicating the cut region on the screen 500, and deletes the display of the annotation input screen 600 from the screen 500. For example, the frame 603 is displayed at the region, which is pre-set for the position designated in step S103; paragraphs [0178]-[0181], the UI unit 113 can change the size, shape and position of the frame 603 in accordance with a user operation ... if the UI unit 113 determines that the region designation has completed (step S132: YES), the UI unit 113 proceeds the sequence to step S133 ... In step S133, the UI unit 113 acquires an image of the region designated by the frame 603 from the partial image 5010 as a cut image. In step S133, the UI unit 113 acquires, for example, coordinates of each vertex of the frame 603 in the full view spherical image 510 including the partial image 5010, and stores the acquired coordinates in, for example, the RAM 1002 in association with the position information indicating the position designated in step S103).
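As context for the claim 8 limitation, the following hypothetical Python sketch illustrates trigger-driven continuous acquisition with a per-image success determination. The names capture_view, is_successfully_captured, and on_trigger, and the blur-score criterion, are assumptions for illustration and are not taken from KAJIWARA.

```python
# Hypothetical sketch (not from the cited references): on a detected trigger,
# continuously acquire several images and judge each one's capture success.
import random

def capture_view() -> dict:
    """Stand-in for grabbing one image of the virtual-space view."""
    return {"pixels": b"...", "blur_score": random.random()}

def is_successfully_captured(image: dict, threshold: float = 0.5) -> bool:
    """Example success criterion: the image is sharp enough."""
    return image["blur_score"] >= threshold

def on_trigger(burst_size: int = 5) -> list:
    """Acquire a burst of images and keep those judged successful."""
    burst = [capture_view() for _ in range(burst_size)]
    return [img for img in burst if is_successfully_captured(img)]

good_images = on_trigger()
print(f"{len(good_images)} of 5 images judged successfully captured")
```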
Regarding claim 14, the combination of KAJIWARA in view of SAWAKI discloses everything claimed as applied above (see claim 1), and KAJIWARA further discloses wherein the circuitry is further configured to rearrange the displayed image or moving image at an arbitrary position and posture in the virtual space according to a user operation of the user (FIGS. 7 and 16; paragraph [0177], as illustrated in FIG. 22, in response to the operation on the selection button 6022, the UI unit 113 displays a frame 603 indicating the cut region on the screen 500, and deletes the display of the annotation input screen 600 from the screen 500. For example, the frame 603 is displayed at the region, which is pre-set for the position designated in step S103; paragraphs [0178]-[0182], the UI unit 113 can change the size, shape and position of the frame 603 in accordance with a user operation ... if the UI unit 113 determines that the region designation has completed (step S132: YES), the UI unit 113 proceeds the sequence to step S133 ... In step S133, the UI unit 113 acquires an image of the region designated by the frame 603 from the partial image 5010 as a cut image ... In step S134, the UI unit 113 displays the cut image acquired in step S132 on the cut image display field 6020 in the annotation input screen 600).
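The claim 14 limitation — rearranging the displayed image at an arbitrary position and posture according to a user operation — can likewise be illustrated with a small hypothetical sketch; PlacedImage and apply_user_operation are illustrative names, not structures from the cited references.

```python
# Hypothetical sketch (not from the cited references): a user operation
# commits a new position and posture for an image object in the virtual space.
from dataclasses import dataclass

@dataclass
class PlacedImage:
    position: tuple = (0.0, 0.0, 0.0)      # x, y, z in virtual-space units
    rotation_deg: tuple = (0.0, 0.0, 0.0)  # pitch, yaw, roll

def apply_user_operation(obj: PlacedImage, new_position: tuple,
                         new_rotation: tuple) -> None:
    """E.g., a drag-and-rotate gesture ends by committing a new pose."""
    obj.position = new_position
    obj.rotation_deg = new_rotation

photo = PlacedImage()
apply_user_operation(photo, (1.5, 0.0, -2.0), (0.0, 90.0, 0.0))
assert photo.position == (1.5, 0.0, -2.0) and photo.rotation_deg[1] == 90.0
```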
Regarding claim 18, the combination of KAJIWARA in view of SAWAKI discloses everything claimed as applied above (see claim 1), and KAJIWARA further discloses wherein the circuitry is further configured to perform display indicating a range of the imaging on a display in which the virtual space is displayed (Paragraph [0177], FIG. 22 is an example of the screen 500 being displayed when designating a cut region in accordance with an operation on the selection button 6022 according to the first embodiment. As illustrated in FIG. 22, in response to the operation on the selection button 6022, the UI unit 113 displays a frame 603 indicating the cut region on the screen 500, and deletes the display of the annotation input screen 600 from the screen 500. For example, the frame 603 is displayed at the region, which is pre-set for the position designated in step S103 (As shown in FIG. 16)).
Regarding claim 19, KAJIWARA discloses an information processing method comprising:
controlling (FIG. 7; paragraph [0099], a part or all of the image acquisition unit 110, the image processing unit 111, the additional information generation unit 112, the UI unit 113, the communication unit 114, and the output unit 115 can be configured as one or more hardware circuits that operate in cooperation with each other) to capture an image or a moving image of a subject (FIGS. 16A and 16B; paragraph [0175], FIG. 21 shows the input screen 600 switched to the region designation screen, which includes, for example, a cut image display field 6020, a check box 6021, and a selection button 6022 ...; paragraph [0177], FIG. 22 is an example of the screen 500 being displayed when designating a cut region in accordance with an operation on the selection button 6022 according to the first embodiment. As illustrated in FIG. 22, in response to the operation on the selection button 6022, the UI unit 113 displays a frame 603 indicating the cut region on the screen 500, and deletes the display of the annotation input screen 600 from the screen 500. For example, the frame 603 is displayed at the region, which is pre-set for the position designated in step S103; paragraph [0181], In step S133, the UI unit 113 acquires an image of the region designated by the frame 603 from the partial image 5010 as a cut image) in a field of view of a user in a virtual space (Paragraph [0149], FIG. 18 is an example of the screen 500 displayed using the display device 1010 under the control of the UI unit 113 according to the first embodiment. In this example case, the UI unit 113 displays the partial image 5010 of the partial region 511 illustrated in FIG. 17 on the entire area of the screen 500); and
controlling to display, at a predetermined area in a display (Paragraph [0175], the annotation input screen 600 on the screen 500), the captured image or the captured moving image in the virtual space (Paragraph [0182], in step S134, the UI unit 113 displays the cut image acquired in step S132 on the cut image display field 6020 in the annotation input screen 600), wherein the captured image or the captured moving image is based on less than an entirety of the field of view of the user viewing the virtual space (Paragraph [0149], FIG. 18 is an example of the screen 500 displayed using the display device 1010 under the control of the UI unit 113 according to the first embodiment. In this example case, the UI unit 113 displays the partial image 5010 of the partial region 511 illustrated in FIG. 17 on the entire area of the screen 500; paragraph [0157], in FIG. 20, the annotation input screen 600 includes tabs 601a, 601b, and 601c for switching functions of the annotation input screen 600 ... The tab 601c is a tab for selecting a cut region, and in accordance with an operation of the tab 601c, the display of the screen 500 is switched to a display for designating a region on the partial image 5010 ...; paragraphs [0173]-[0174], if the tab 601c is designated in step S104, the UI unit 113 determines to perform a selection of cut region for the partial image 5010 in step S130, and proceeds the sequence to step S131 ... In step S131, the UI unit 113 switches the display of the annotation input screen 600 to a display of the region designation screen used for designating the cut region as a diagnosis region including a diagnosis target image. FIG. 21 is an example of the annotation input screen 600 switched to the region designation screen according to the first embodiment; paragraph [0175], In FIG. 21, the annotation input screen 600 switched to the region designation screen includes, for example, a cut image display field 6020 ... The cut image display field 6020 displays an image cut from a region designated for the partial image 5010. Thus, the user views the virtual space as shown in FIG. 20, selects a subject using a marker 602, and designates a cut region of a structure within the view of the virtual space; an image cut from the region designated in the virtual space is then displayed) and the predetermined area is less than an entirety of an area of the display (Paragraph [0175], FIG. 21 shows the area of the annotation input screen 600 is less than an entirety of an area of the display 500).
However, KAJIWARA does not specifically disclose wherein the displaying of the captured image or the captured moving image includes maintaining displaying of the captured image or the captured moving image at the predetermined area in the display in response to the field of view of the user changing.
In addition, SAWAKI discloses (Abstract, a method of providing a virtual space according to at least one embodiment of this disclosure includes defining a virtual space ... The method further includes determining a field of view of the first user based on a first virtual camera arranged in the virtual space. The method further includes displaying an image corresponding to the field of view of the first user on the first head-mounted device ...; FIGS. 2 and 12; paragraph [0164], ... In FIG. 12A, the virtual space 11A ... An avatar object 6A of the user 5A and an avatar object 6B of the user 5B are present in the virtual space 11A ...; paragraphs [0194]-[0195], with reference to FIG. 15, the computer 200 provides the virtual space 11 to the HMD (head-mounted device) 120 worn by the user 5 ... The computer 200 arranges the avatar object 6 corresponding to the user 5 in the virtual space 11 ... The computer 200 further displays on the monitor of the HMD 120 an image corresponding to the field-of-view region of the avatar object 6 ...) wherein the displaying of the captured image or the captured moving image includes maintaining displaying of the captured image or the captured moving image at the predetermined area in the display (Paragraphs [0196]-[0197], the avatar object 6 moves in accordance with the operation of the user 5. The user 5 operates the camera object 1541 with the avatar object 6 to photograph the virtual space 11 ... In the example of FIG. 15, a photography range 1542 of the camera object 1541 includes a flower 1543, which is a portion of the panorama image 13 ...; FIG. 16; paragraphs [0200]-[0201], under the state of FIG. 16, the user 5 is visually recognizing a field-of-view image 1617 developed on the monitor of the HMD 120 ... The camera object 1541 has a preview screen. In the example of FIG. 16, the preview screen includes a flower 1543. The computer 200 executes photography in the virtual space 11 when a press of a button 1647 arranged on the camera object 1541 by the hand object 1644 is received. As a result, the computer 200 generates a photograph image ...; paragraph [0202], the computer 200 generates a photograph object 1545 representing the photograph image, and arranges the object near the camera object 1541. The computer 200 also moves the photograph object 1545 to the position of the table object 1546) in response to the field of view of the user changing (Paragraphs [0198]-[0199], the computer 200 arranges the photograph object 1545 on a table object 1546 ... The user 5 moves to the table object 1546 in the virtual space 11, and confirms the generated photograph image ...; paragraph [0268], FIG. 25 is a diagram of processing of notifying the user 5A that the photograph object 1545 is arranged at the accumulation place according to at least one embodiment of this disclosure. A field-of-view image 2517 visually recognized by the user 5A includes a table object 1546 and a plurality of photograph objects 1545; FIG. 15 shows the virtual space 11; the flower 1543, a portion of the panorama image 13, and a table object 1546 are arranged at different viewpoints; paragraph [0268], FIG. 25 shows the field-of-view image corresponding to the table object 1546).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of inputting a diagnosis result of a diagnosis target detectable for a structure taught by KAJIWARA to incorporate the teachings of SAWAKI, applying the method for displaying an image corresponding to the field of view of the user taught by SAWAKI to generate a virtual image based on a first viewpoint of the user and to display the generated image in the virtual space at a second viewpoint of the user different from the first viewpoint. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify KAJIWARA according to the relied-upon teachings of SAWAKI to obtain the invention as specified in the claim.
Regarding claim 20, KAJIWARA discloses an information processing system comprising:
a display device (Paragraph [0093], FIG. 6 is an example of a hardware block diagram of an information processing apparatus 100a, which can be used as an input apparatus for inputting an annotation according to the first embodiment. As illustrated in FIG. 6, the information processing apparatus 100a includes, for example, a CPU 1000, a ROM 1001, a RAM 1002, a graphic I/F 1003, a storage 1004, a data I/F 1005, a communication I/F 1006, an input device 1011, which are connected with each other via a bus 1030, and further a display device 1010 connected to the graphic I/F 1003; paragraph [0098], FIG. 7 is an example of a functional block diagram of the information processing apparatus 100a according to the first embodiment. As illustrated in FIG. 7, the information processing apparatus 100a includes, for example, an image acquisition unit 110, an image processing unit 111, an additional information generation unit 112, a user interface (UI) unit 113, a communication unit 114, and an output unit 115; paragraph [0100], the UI unit 113 is also a display unit that generates a screen image to be displayed on the display device 1010);
a controller (Paragraph [0093], a CPU 1000); and
an information processing apparatus including circuitry (Paragraph [0099], a part or all of the image acquisition unit 110, the image processing unit 111, the additional information generation unit 112, the UI unit 113, the communication unit 114, and the output unit 115 can be configured as one or more hardware circuits that operate in cooperation with each other) configured to:
control to capture an image or a moving image of a subject (FIGS. 16A and 16B; paragraph [0175], FIG. 21 shows the input screen 600 switched to the region designation screen, which includes, for example, a cut image display field 6020, a check box 6021, and a selection button 6022 ...; paragraph [0177], FIG. 22 is an example of the screen 500 being displayed when designating a cut region in accordance with an operation on the selection button 6022 according to the first embodiment. As illustrated in FIG. 22, in response to the operation on the selection button 6022, the UI unit 113 displays a frame 603 indicating the cut region on the screen 500, and deletes the display of the annotation input screen 600 from the screen 500. For example, the frame 603 is displayed at the region, which is pre-set for the position designated in step S103; paragraph [0181], In step S133, the UI unit 113 acquires an image of the region designated by the frame 603 from the partial image 5010 as a cut image) in a field of view of a user in a virtual space (Paragraph [0149], FIG. 18 is an example of the screen 500 displayed using the display device 1010 under the control of the UI unit 113 according to the first embodiment. In this example case, the UI unit 113 displays the partial image 5010 of the partial region 511 illustrated in FIG. 17 on the entire area of the screen 500); and
control to display, at a predetermined area in a display (Paragraph [0175], the annotation input screen 600 on the screen 500), the captured image or the captured moving image in the virtual space (Paragraph [0182], in step S134, the UI unit 113 displays the cut image acquired in step S132 on the cut image display field 6020 in the annotation input screen 600), wherein the captured image or the captured moving image is based on less than an entirety of the field of view of the user viewing the virtual space (Paragraph [0149], FIG. 18 is an example of the screen 500 displayed using the display device 1010 under the control of the UI unit 113 according to the first embodiment. In this example case, the UI unit 113 displays the partial image 5010 of the partial region 511 illustrated in FIG. 17 on the entire area of the screen 500; paragraph [0157], in FIG. 20, the annotation input screen 600 includes tabs 601a, 601b, and 601c for switching functions of the annotation input screen 600 ... The tab 601c is a tab for selecting a cut region, and in accordance with an operation of the tab 601c, the display of the screen 500 is switched to a display for designating a region on the partial image 5010 ...; paragraphs [0173]-[0174], if the tab 601c is designated in step S104, the UI unit 113 determines to perform a selection of cut region for the partial image 5010 in step S130, and proceeds the sequence to step S131 ... In step S131, the UI unit 113 switches the display of the annotation input screen 600 to a display of the region designation screen used for designating the cut region as a diagnosis region including a diagnosis target image. FIG. 21 is an example of the annotation input screen 600 switched to the region designation screen according to the first embodiment; paragraph [0175], In FIG. 21, the annotation input screen 600 switched to the region designation screen includes, for example, a cut image display field 6020 ... The cut image display field 6020 displays an image cut from a region designated for the partial image 5010. Thus, the user views the virtual space as shown in FIG. 20, selects a subject using a marker 602, and designates a cut region of a structure within the view of the virtual space; an image cut from the region designated in the virtual space is then displayed) and the predetermined area is less than an entirety of an area of the display (Paragraph [0175], FIG. 21 shows the area of the annotation input screen 600 is less than an entirety of an area of the display 500).
However, KAJIWARA does not specifically disclose wherein the displaying of the captured image or the captured moving image includes maintaining displaying of the captured image or the captured moving image at the predetermined area in the display in response to the field of view of the user changing.
In addition, SAWAKI discloses (Abstract, a method of providing a virtual space according to at least one embodiment of this disclosure includes defining a virtual space ... The method further includes determining a field of view of the first user based on a first virtual camera arranged in the virtual space. The method further includes displaying an image corresponding to the field of view of the first user on the first head-mounted device ...; FIGS. 2 and 12; paragraph [0164], ... In FIG. 12A, the virtual space 11A ... An avatar object 6A of the user 5A and an avatar object 6B of the user 5B are present in the virtual space 11A ...; paragraphs [0194]-[0195], with reference to FIG. 15, the computer 200 provides the virtual space 11 to the HMD (head-mounted device) 120 worn by the user 5 ... The computer 200 arranges the avatar object 6 corresponding to the user 5 in the virtual space 11 ... The computer 200 further displays on the monitor of the HMD 120 an image corresponding to the field-of-view region of the avatar object 6 ...) wherein the displaying of the captured image or the captured moving image includes maintaining displaying of the captured image or the captured moving image at the predetermined area in the display (Paragraphs [0196]-[0197], the avatar object 6 moves in accordance with the operation of the user 5. The user 5 operates the camera object 1541 with the avatar object 6 to photograph the virtual space 11 ... In the example of FIG. 15, a photography range 1542 of the camera object 1541 includes a flower 1543, which is a portion of the panorama image 13 ...; FIG. 16; paragraphs [0200]-[0201], under the state of FIG. 16, the user 5 is visually recognizing a field-of-view image 1617 developed on the monitor of the HMD 120 ... The camera object 1541 has a preview screen. In the example of FIG. 16, the preview screen includes a flower 1543. The computer 200 executes photography in the virtual space 11 when a press of a button 1647 arranged on the camera object 1541 by the hand object 1644 is received. As a result, the computer 200 generates a photograph image ...; paragraph [0202], the computer 200 generates a photograph object 1545 representing the photograph image, and arranges the object near the camera object 1541. The computer 200 also moves the photograph object 1545 to the position of the table object 1546) in response to the field of view of the user changing (Paragraphs [0198]-[0199], the computer 200 arranges the photograph object 1545 on a table object 1546 ... The user 5 moves to the table object 1546 in the virtual space 11, and confirms the generated photograph image ...; paragraph [0268], FIG. 25 is a diagram of processing of notifying the user 5A that the photograph object 1545 is arranged at the accumulation place according to at least one embodiment of this disclosure. A field-of-view image 2517 visually recognized by the user 5A includes a table object 1546 and a plurality of photograph objects 1545; FIG. 15 shows the virtual space 11; the flower 1543, a portion of the panorama image 13, and a table object 1546 are arranged at different viewpoints; paragraph [0268], FIG. 25 shows the field-of-view image corresponding to the table object 1546).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of inputting a diagnosis result of a diagnosis target detectable for a structure taught by KAJIWARA to incorporate the teachings of SAWAKI, applying the method for displaying an image corresponding to the field of view of the user taught by SAWAKI to generate a virtual image based on a first viewpoint of the user and to display the generated image in the virtual space at a second viewpoint of the user different from the first viewpoint. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify KAJIWARA according to the relied-upon teachings of SAWAKI to obtain the invention as specified in the claim.
Regarding claim 21, the combination of KAJIWARA in view of SAWAKI discloses everything claimed as applied above (see claim 1), and KAJIWARA further discloses wherein an arrangement of subject in the virtual space (FIG. 16; paragraph [0175], In FIG. 21 ... an image of a region set in advance for the position designated in step S103 can be cut from the partial image 5010 and displayed on the cut image display field 6020) is different than the arrangement of the subject in the virtual space in the field of view of the user at a time of the capturing of the image or moving image (Paragraph [0177], FIG. 22 is an example of the screen 500 being displayed when designating a cut region in accordance with an operation on the selection button 6022 according to the first embodiment. As illustrated in FIG. 22, in response to the operation on the selection button 6022, the UI unit 113 displays a frame 603 indicating the cut region on the screen 500).
Regarding claim 22, the combination of KAJIWARA in view of SAWAKI discloses everything claimed as applied above (see claim 1), and KAJIWARA further discloses wherein a position of the subject in the virtual space (FIG. 16; paragraph [0175], In FIG. 21 ... an image of a region set in advance for the position designated in step S103 can be cut from the partial image 5010 and displayed on the cut image display field 6020) is different than the position of the subject in the virtual space in the field of view of the user at a time of the capturing of the image or moving image (Paragraph [0177], FIG. 22 is an example of the screen 500 being displayed when designating a cut region in accordance with an operation on the selection button 6022 according to the first embodiment. As illustrated in FIG. 22, in response to the operation on the selection button 6022, the UI unit 113 displays a frame 603 indicating the cut region on the screen 500).
Regarding claim 23, the combination of KAJIWARA in view of SAWAKI discloses everything claimed as applied above (see claim 1), and KAJIWARA further discloses wherein the captured image or moving image is displayed (FIG. 16; paragraph [0175], In FIG. 21 ... an image of a region set in advance for the position designated in step S103 can be cut from the partial image 5010 and displayed on the cut image display field 6020) in a different view than the subject in the virtual space (Paragraph [0177], FIG. 22 is an example of the screen 500 being displayed when designating a cut region in accordance with an operation on the selection button 6022 according to the first embodiment. As illustrated in FIG. 22, in response to the operation on the selection button 6022, the UI unit 113 displays a frame 603 indicating the cut region on the screen 500).
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over KAJIWARA (U.S. Patent Application Publication 2019/0306419 A1) in view of SAWAKI et al. (U.S. Patent Application Publication 2019/0026950 A1), and further in view of Baker et al. (U.S. Patent No. 11,204,678 B1).
Regarding claim 4, the combination of KAJIWARA in view of SAWAKI discloses everything claimed as applied above (see claim 3).
However, KAJIWARA does not specifically disclose wherein the circuitry is further configured to perform control to arrange the displayed image or the displayed moving image in the virtual space outside the field of view of the user.
In addition, Baker discloses (Abstract, systems and methods related to user interfaces for object exploration and manipulation within virtual reality environments may include a plurality of user interfaces that are presented responsive to various user inputs or interactions ...) wherein the circuitry is further configured to perform control to arrange the displayed image or the displayed moving image in the virtual space outside the field of view of the user (Col 26, lines 7-40, FIG. 5 is a schematic diagram 500 of an example user interface including various zones for object exploration within a virtual reality environment ... a detail page including a plurality of columns of detail panels may be presented or arranged relative to one or more zones within a field of view of a user of the virtual reality environment. For example, the one or more zones may include a first zone 570, a second zone 572, and a third zone 574 ...; Col 26, line 54 to Col 27, line 5, As shown in FIGS. 4 and 5, the second column 452 of object variations, the third column 453 of object aspects or characteristics, the fourth column 454 of object images, and the fifth column 455 of user reviews may be substantially presented within the first zone 570, based on a determination that the information presented in these columns may be most relevant, important, or useful for a user ... Further, the seventh column 457 of similar, related, or recommended objects and/or any other columns of detail panels that are presented outside a current field of view may be substantially presented within or outside the third zone 574, based on a determination that the information presented in these columns may be additional, ancillary, supplementary, or tertiary for the user).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of inputting a diagnosis result of a diagnosis target detectable for a structure taught by KAJIWARA in view of SAWAKI to incorporate the teachings of Baker, applying the user interface including various zones for object exploration within a virtual reality environment taught by Baker to provide a user interface for displaying and arranging the image outside the current field of view of the user in the virtual environment. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify KAJIWARA in view of SAWAKI according to the relied-upon teachings of Baker to obtain the invention as specified in the claim.
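As a purely illustrative aside on the claim 4 limitation, the following hypothetical sketch shows one way to test whether a placement direction lies outside the user's current horizontal field of view; is_outside_fov and the 90-degree default are assumptions for the example, not details of Baker.

```python
# Hypothetical sketch (not Baker's method): test whether a placement
# direction lies outside the user's current horizontal field of view.
def is_outside_fov(view_yaw_deg: float, target_yaw_deg: float,
                   fov_deg: float = 90.0) -> bool:
    """True if the target direction falls outside the field of view."""
    diff = (target_yaw_deg - view_yaw_deg + 180.0) % 360.0 - 180.0
    return abs(diff) > fov_deg / 2.0

# Place the captured image directly behind the user (180 deg from the view).
assert is_outside_fov(view_yaw_deg=0.0, target_yaw_deg=180.0)
assert not is_outside_fov(view_yaw_deg=0.0, target_yaw_deg=10.0)
```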
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over KAJIWARA (U.S. Patent Application Publication 2019/0306419 A1) in view of SAWAKI et al. (U.S. Patent Application Publication 2019/0026950 A1), and further in view of Ahiska et al. (U.S. Patent Application Publication 2010/0002070 A1).
Regarding claim 5, the combination of KAJIWARA in view of SAWAKI discloses everything claimed as applied above (see claim 3).
However, KAJIWARA does not specifically disclose wherein the circuitry is further configured to perform control to arrange the displayed image or the displayed moving image in a place in the virtual space that avoids a position overlapping with a predetermined viewing target in the virtual space.
In addition, Ahiska discloses (Abstract, methods and systems of transmitting a plurality of views from a video camera are disclosed. The camera captures a plurality of views from the lens and scales the view to a specified size. Each view can correspond to a separate virtual camera view of a region of interest and separately controlled ...) wherein the circuitry is further configured to perform control to arrange the displayed image or the displayed moving image in a place in the virtual space (FIG. 11; paragraph [0086], superimposed view 1020) that avoids a position overlapping with a predetermined viewing target in the virtual space (Paragraph [0086], superimposed view 1020 over an underlying wide-angle view 1010 within the same frame 1015. Thus, the virtual image “superimposed view 1020” has been placed at a left side position, and that position avoids overlapping with a viewing target “car” in the virtual space “frame 1015”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of inputting a diagnosis result of a diagnosis target detectable for a structure taught by KAJIWARA in view of SAWAKI to incorporate the teachings of Ahiska, generating the superimposed view to provide a different arrangement between the image and the subject that avoids a position overlapping with a predetermined viewing target in the virtual space. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify KAJIWARA in view of SAWAKI according to the relied-upon teachings of Ahiska to obtain the invention as specified in the claim.
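For the claim 5 limitation, a minimal hypothetical sketch of overlap-avoiding placement follows; the rectangle representation and the names overlaps and place_avoiding are assumptions for illustration, not Ahiska's method.

```python
# Hypothetical sketch (not Ahiska's method): pick a display position whose
# rectangle does not overlap a predetermined viewing target. Rectangles are
# (x, y, width, height) in screen coordinates.
def overlaps(a: tuple, b: tuple) -> bool:
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def place_avoiding(target: tuple, size: tuple, candidates: list) -> tuple:
    """Return the first candidate position that avoids the target."""
    w, h = size
    for x, y in candidates:
        if not overlaps((x, y, w, h), target):
            return (x, y)
    raise ValueError("no non-overlapping position available")

target = (400, 200, 240, 160)      # e.g., the "car" in the wide-angle view
pos = place_avoiding(target, (320, 180), [(380, 190), (0, 0)])
assert pos == (0, 0)               # the left-side position avoids the target
```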
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over KAJIWARA (U.S. Patent Application Publication 2019/0306419 A1) in view of SAWAKI et al. (U.S. Patent Application Publication 2019/0026950 A1), further in view of Sempe et al. (U.S. Patent No. 11,250,617 B1), and further in view of KAWAMURA et al. (U.S. Patent Application Publication 2014/0092220 A1).
Regarding claim 9, the combination of KAJIWARA in view of SAWAKI discloses everything claimed as applied above (see claim 8), and KAJIWARA further discloses wherein among the continuously acquired images or moving images, the circuitry is further configured to arrange an image or a moving image determined to be successfully captured in the virtual space (see claim 8).
However, KAJIWARA does not specifically disclose the image or a moving image captured in the virtual space as a two-dimensional virtual image.
In addition, Sempe discloses (Col 4; FIG. 1 is a diagram that illustrates the concept of using a camera control device 102 to control a virtual camera 116 in a virtual 3D environment 114 and capture electronic images (e.g., video and/or images) of the virtual 3D environment 114 using the virtual camera 116 ...; Col 17, lines 39-49, FIG. 10 illustrates a computing device 1010 on which modules of this technology may execute ... The computing device 1010 may include one or more processors 1012 ...; Col 18, lines 28-38, the processor 1012 may represent multiple processors and the memory device 1020 may represent multiple memory units that operate in parallel to the processing circuits ...) the image or a moving image captured in the virtual space as “4:3, 3:2, and 16:9 aspect ratios” (Col 4, lines 57-67, a virtual 3D environment 114 may include virtual objects. The virtual objects may include, but are not limited to: avatars (e.g., a human avatar, animal avatar, fantasy avatar, etc.) and other simulated objects (e.g., landscapes, vegetation, buildings, streets, mountains, enemies, vehicles, etc.). Capturing a virtual 3D environment 114 in electronic images (e.g., video and static images) may include capturing virtual objects within a view of a virtual camera 116 ...; Col 5, line 55 to Col 6, line 24, ... A view 118 may be an observable area of a virtual 3D environment 114 as might be viewed through a simulated camera view finder ... An aspect ratio of a view 118 may be based on an industry standard for which electronic images are to be used. As an illustration, an aspect ratio used to generate a view 118 can include the commonly used 4:3, 3:2, and 16:9 aspect ratios ...).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of inputting a diagnosis result of a diagnosis target detectable for a structure taught by KAJIWARA in view of SAWAKI to incorporate the teachings of Sempe, applying the method for controlling a virtual camera in a virtual environment taught by Sempe to capture and generate a virtual image using standard aspect ratios. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify KAJIWARA in view of SAWAKI according to the relied-upon teachings of Sempe to obtain the invention as specified in the claim.
However, the combination of KAJIWARA in view of SAWAKI and Sempe does not specifically disclose the image (“an aspect ratio used to generate a view 118 can include the commonly used 4:3, 3:2, and 16:9 aspect ratios”) or the moving image as a two-dimensional virtual image.
In addition, KAWAMURA discloses (Paragraph [0002], the present invention relates to an image capturing element which captures a stereoscopic moving image and a planar moving image) the image (“an aspect ratio used to generate a view 118 can include the commonly used 4:3, 3:2, and 16:9 aspect ratios”) or the moving image as a two-dimensional virtual image (Paragraph [0072], the planar moving image of the image capturing apparatus of the embodiment is characterized in that only pixel interleaving of the vertical direction is performed without performing pixel interleaving for the pixels of the horizontal direction of the image capturing element to produce an image having an aspect ratio of 16:9. Accordingly, it becomes possible to obtain the 2D moving image which can be viewed with a large screen of the television receiver having the aspect ratio of 16:9).
It is noted that KAWAMURA does not describe “image a subject in a virtual space”. However, KAWAMURA describes “produce a 2D image having an aspect ratio of 16:9 based on the captured image and the 2D image can be viewed with a large screen of the television”. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of inputting a diagnosis result of a diagnosis target detectable for a structure taught by KAJIWARA in view of SAWAKI and Sempe to incorporate the teachings of KAWAMURA, applying the 2D image production taught by KAWAMURA to generate the 2D image having an aspect ratio from the captured image and display the 2D image in the virtual space. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify KAJIWARA in view of SAWAKI and Sempe according to the relied-upon teachings of KAWAMURA to obtain the invention as specified in the claim.
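To illustrate the combined teaching on claim 9 — producing a two-dimensional image at a standard aspect ratio such as 16:9 — the following hypothetical sketch computes the largest centered crop with a requested aspect ratio; crop_to_aspect is an illustrative name, not a function from the cited references.

```python
# Hypothetical sketch (not from the cited references): compute the largest
# centered crop of a captured view that has a requested aspect ratio.
def crop_to_aspect(width: int, height: int, aspect: float = 16 / 9) -> tuple:
    """Return (width, height) of the largest crop with the given ratio."""
    if width / height > aspect:             # too wide: trim width
        return int(height * aspect), height
    return width, int(width / aspect)       # too tall: trim height

assert crop_to_aspect(4096, 4096) == (4096, 2304)   # square view -> 16:9
assert crop_to_aspect(4000, 1000) == (1777, 1000)   # wide view -> 16:9
```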
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over KAJIWARA (U.S. Patent Application Publication 2019/0306419 A1) in view of SAWAKI et al. (U.S. Patent Application Publication 2019/0026950 A1), and further in view of Cragg et al. (U.S. Patent Application Publication 2020/0410761 A1).
Regarding claim 10, the combination of KAJIWARA in view of SAWAKI discloses everything claimed as applied above (see claim 1).
However, KAJIWARA does not specifically disclose wherein the circuitry is further configured to perform notification to notify the user that the captured image or the captured moving image has been arranged.
In addition, Cragg discloses (Paragraph [0039], the photo guide system receives a live stream of images (e.g., a video stream) corresponding to the position and orientation of the image capture device. Each image in the live stream represents a particular view of the real-world environment from the image capture device's perspective. The photo guide system uses this stream of live images to generate AR content, where the AR content comprises a stream of AR images generated based upon the live images ...; FIG. 2; paragraph [0055], the photo guide system 206 includes hardware and/or software configured to guide the capture of an image with preconfigured parameters ...) wherein the circuitry is further configured to perform notification to notify the user that the captured image or the captured moving image has been arranged (FIG. 6A; paragraphs [0094]-[0096], the first user interface for initiating guided image capture 600A includes a live image 601 ... the interface elements include the text “Help me take a photo! Here's how I want it to look.” 602. The interface elements further include a virtual frame 604 showing the position and orientation that depicts the particular view. The interface elements further include an overlay 606 showing a predefined composition corresponding to the particular view (e.g., as defined by a first user via the interface elements illustrated in FIG. 5A) ... The interface elements further include a user-selectable button 608 labeled “Get Started.” Upon detecting user interaction with the button 608, the system initiates guiding the user to align the user device in accordance with the virtual frame 604 and transitions to display interface elements as shown in FIGS. 6C-6F. Thus, the overlay 606 is a captured image arranged within the live image 601, and the photo guide system uses the button 608 labeled “Get Started” to notify the user that the captured image has been arranged).
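For illustration only (the identifiers are hypothetical and do not come from Cragg): a minimal sketch of a guided-capture loop that notifies the user once the live image is aligned with the virtual frame, i.e., once the captured image has been arranged.

    def guided_capture(live_frames, is_aligned, notify):
        """is_aligned(frame) -> bool: does the live frame match the virtual frame?"""
        for frame in live_frames:
            if is_aligned(frame):
                notify("Captured image has been arranged.")  # user notification
                return frame
            notify("Align the device with the virtual frame.")
        return None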
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of inputting a diagnosis result of a diagnosis target detectable for a structure taught by KAJIWARA in view of SAWAKI to incorporate the teachings of Cragg, and to apply the photo guide system taught by Cragg to provide a user interface for displaying and arranging the captured image in the virtual environment and to notify the user of the image arrangement. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify KAJIWARA in view of SAWAKI according to the relied-upon teachings of Cragg to obtain the invention as specified in the claim.
Claims 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over KAJIWARA (U.S. Patent Application Publication 2019/0306419 A1) in view of SAWAKI et al (U.S. Patent Application Publication 2019/0026950 A1) in view of LEE et al (U.S. Patent Application Publication 2018/0288391 A1).
Regarding claim 11, the combination of KAJIWARA in view of SAWAKI discloses everything claimed as applied above (see claim 1).
However, KAJIWARA does not specifically disclose wherein the circuitry is further configured to add accompanying information regarding the captured image or the captured moving image to the displayed image or the displayed moving image.
In addition, LEE discloses (Abstract, an electronic device including a display; and a controller configured to display a playback screen of virtual reality content on the display, in response to a capture command while displaying the playback screen of the virtual reality content, display a virtual icon on the playback screen, and in response to a touch input applied to the virtual icon, capture an image of a virtual space of the virtual reality content based on a position of a user in the virtual space and corresponding to a touch level of the touch input) wherein the circuitry is further configured to add accompanying information regarding the captured image or the captured moving image to the displayed image or the displayed moving image (FIGS. 1A, 1B and 2; paragraph [0040], the electronic device 100 may be provided with user input units 123a, 123b, 123c manipulated to receive a control command; paragraph [0159], the captured image can be stored in the memory 170 or 270 ...; FIG. 13; paragraphs [0205]-[0207], prior to generating a capture image, time information may be received through the user input units 123a, 123b, 123c in the electronic device 100 or a user input unit of the terminal device 200 electrically connected to the electronic device 100. For example, from 10 a.m. to 2 p.m. may be entered as time information. The entered time information is stored in the memory 170 or the like ... Next, the controller 180 generates a capture image (hereinafter, referred to as “first capture image”) of the virtual space corresponding to a touch level of the touch input applied to the virtual icon ... When there is a capture image, namely, the first capture image, generated during the displayed time, a user position (R1) corresponding to the first capture image may be displayed on the second capture image).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of inputting a diagnosis result of a diagnosis target detectable for a structure taught by KAJIWARA in view of SAWAKI to incorporate the teachings of LEE, and to apply the method of capturing a virtual space in an electronic device taught by LEE to provide the additional information for the captured image and store it in the memory. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify KAJIWARA in view of SAWAKI according to the relied-upon teachings of LEE to obtain the invention as specified in the claim.
Regarding claim 12, the combination of KAJIWARA in view of SAWAKI in view of LEE discloses everything claimed as applied above (see claim 11).
However, KAJIWARA does not specifically disclose wherein the accompanying information includes at least information regarding the subject appearing in the captured image or the captured moving image or information regarding a time when the imaging has been performed on a reproduction time axis of content provided in the virtual space.
In addition, LEE discloses wherein the accompanying information includes at least information regarding the subject appearing in the captured image or the captured moving image or information regarding a time (FIG. 13; paragraph [0205], prior to generating a capture image, time information may be received through the user input units 123a, 123b, 123c in the electronic device 100 or a user input unit of the terminal device 200 electrically connected to the electronic device 100. For example, from 10 a.m. to 2 p.m. may be entered as time information. The entered time information is stored in the memory 170 or the like ...) when the imaging has been performed on a reproduction time axis of content provided in the virtual space (Paragraph [0206], the controller 180 generates a capture image (hereinafter, referred to as “first capture image”) of the virtual space corresponding to a touch level of the touch input applied to the virtual icon. Then, at the moment when a time corresponding to the entered time information has elapsed, a capture image (hereinafter, referred to as a “second capture image”) of the virtual space in which the user travels for a period of time corresponding to the entered time information is generated).
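For illustration only (a sketch under assumed names, not LEE's implementation): accompanying information, namely the subject appearing in the image and the time of imaging on the content's reproduction time axis, attached to a captured image as metadata before storage.

    from dataclasses import dataclass

    @dataclass
    class CapturedImage:
        pixels: bytes           # the captured image data
        subject: str            # information regarding the subject appearing in the image
        playback_time_s: float  # time of imaging on the content's reproduction time axis

    # Hypothetical usage: the metadata travels with the image when it is stored.
    capture = CapturedImage(pixels=b"...", subject="avatar", playback_time_s=372.5)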
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of inputting a diagnosis result of a diagnosis target detectable for a structure taught by KAJIWARA in view of SAWAKI to incorporate the teachings of LEE, and to apply the method of capturing a virtual space in an electronic device taught by LEE to provide the time information for capturing and generating the image in the virtual space. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify KAJIWARA in view of SAWAKI according to the relied-upon teachings of LEE to obtain the invention as specified in the claim.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over KAJIWARA (U.S. Patent Application Publication 2019/0306419 A1) in view of SAWAKI et al (U.S. Patent Application Publication 2019/0026950 A1) in view of Drobitko et al (U.S. Patent Application Publication 2020/0364881 A1).
Regarding claim 13, the combination of KAJIWARA in view of SAWAKI discloses everything claimed as applied above (see claim 1).
However, KAJIWARA does not specifically disclose wherein the circuitry is further configured to acquire an image or moving image excluding at least an additional virtual object overlapping with the subject.
In addition, Drobitko discloses (Abstract, ... The method comprises detecting, by a page detector, an image of a drawing area; initializing a marker-less tracker, wherein said initializing comprises (a) capturing, via the page detector, a frame of the drawing area, (b) displaying, via a graphical user interface (GUI) of the mobile computing device ...) wherein the circuitry is further configured to acquire an image or moving image (FIG. 3; paragraph [0037], the system 10 includes a page detector 18 for detecting an image of a drawing area, a marker-less tracker 20 for initializing by capturing, via the page detector 18, a frame of the drawing area and displaying, via a graphical user interface (GUI) of the mobile computing device 16, the frame of the drawing area, and uniformly distributing template patches over the frame of the drawing area ...) excluding at least an additional virtual object overlapping with the subject (Paragraph [0039], the processor 12 is adaptable for detecting an out of the frame or an obscured template patch and filtering the out of the frame or the obscured template patch; and updating a contour of the out of the frame or the obscured template patch; paragraph [0051], patches filtering step excludes patches that are out of the paper area or overlapped by some object like hand or pen. Contours of such objects are updated in each frame by external procedures.).
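For illustration only (axis-aligned rectangles stand in for Drobitko's contours; the function is hypothetical): filtering out template patches that are out of the frame or overlapped by an occluding object, so that the acquired image excludes the overlapping object.

    def keep_patch(patch, frame, occluders):
        """All arguments are (x0, y0, x1, y1) rectangles; occluders is a list."""
        inside = (patch[0] >= frame[0] and patch[1] >= frame[1] and
                  patch[2] <= frame[2] and patch[3] <= frame[3])
        overlapped = any(patch[0] < o[2] and o[0] < patch[2] and
                         patch[1] < o[3] and o[1] < patch[3] for o in occluders)
        return inside and not overlapped

    # usage: patches = [p for p in patches if keep_patch(p, frame_rect, occluder_rects)]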
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of inputting a diagnosis result of a diagnosis target detectable for a structure taught by KAJIWARA in view of SAWAKI to incorporate the teachings of Drobitko, and to apply the template patch tracking method taught by Drobitko to provide object detection for detecting an object overlapping the subject in the captured image of the virtual space and to filter the overlapping object out of the captured image. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify KAJIWARA in view of SAWAKI according to the relied-upon teachings of Drobitko to obtain the invention as specified in the claim.
Claims 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over KAJIWARA (U.S. Patent Application Publication 2019/0306419 A1) in view of SAWAKI et al (U.S. Patent Application Publication 2019/0026950 A1) in view of Tsuda et al (U.S. Patent Application Publication 2009/0303246 A1).
Regarding claim 15, the combination of KAJIWARA in view of SAWAKI discloses everything claimed as applied above (see claim 1).
However, KAJIWARA does not specifically disclose wherein the circuitry is further configured to move and align a plurality of the displayed images or moving images arranged at predetermined positions to other places in the virtual space.
In addition, Tsuda discloses (Paragraph [0008], in one embodiment of the present invention, the image viewing device may further comprise object position moving means for moving a position of the image object located ahead of the viewpoint in the virtual space. In this embodiment, the object position moving means may move the position of the image object so as to overlap less with another image object when viewed from the view point. Also, the object position moving means may move the position of the image object according to a state of the viewpoint) wherein the circuitry is further configured to move and align a plurality of the displayed images or moving images arranged at predetermined positions to other places in the virtual space (Paragraph [0044], FIG. 3 shows one example of the virtual space. As shown in the drawing, many image objects 52 are placed in the virtual space 50. Each image object 52 is a rectangular object having a thumb nail image ...; paragraph [0060], in the case where data about many images is recorded in the hard disk 38 so that many image objects 52 are placed in the virtual space 50, images of many image objects 52 are included in the space image, including many image objects 50 either partly or entirety covered by other image object 52. In view of the above, it may be arranged such that the respective image objects 52 are maintained in the positions as stored in the space database while the viewpoint 52 is moving, and some or all of the image objects 52 may move and/or be reduced in size to be relocated so as not to be superimposed by other image object in the space image shown on the monitor 26 when the viewpoint 54 has been stopped for more than a predetermined period of time).
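For illustration only (a greedy sketch, not Tsuda's algorithm): relocating screen-space image objects that overlap, once the viewpoint has been still for the predetermined period, so each displayed image moves to another place where it overlaps less.

    def relocate(rects, step=10, max_iter=100):
        """rects: list of [x0, y0, x1, y1]; nudge each rect downward until it
        no longer overlaps any earlier (already-placed) rect."""
        def overlaps(a, b):
            return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]
        for i, r in enumerate(rects):
            for _ in range(max_iter):
                if not any(overlaps(r, rects[j]) for j in range(i)):
                    break
                r[1] += step
                r[3] += step  # shift the whole rect downward and retry
        return rects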
It is noted that Tsuda describes a camera unit that inputs captured images (as shown in FIG. 1) and does not specifically describe “imaging a subject in a virtual space”. However, Tsuda discloses a technique for displaying and controlling a plurality of captured images in the virtual space on the display from a viewpoint. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of inputting a diagnosis result of a diagnosis target detectable for a structure taught by KAJIWARA in view of SAWAKI to incorporate the teachings of Tsuda to have a computer control method for a viewpoint in a virtual space, to move the positions of the displayed images, and to align the captured images arranged at predetermined positions to other places in the virtual space so as to overlap less with other image objects when viewed from the viewpoint. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify KAJIWARA in view of SAWAKI according to the relied-upon teachings of Tsuda to obtain the invention as specified in the claim.
Regarding claim 16, the combination of KAJIWARA in view of SAWAKI in view of Tsuda discloses everything claimed as applied above (see claim 15).
However, KAJIWARA does not specifically disclose wherein the circuitry is further configured to store an arrangement state of the plurality of displayed images or moving images before being aligned, and perform control to return the plurality of displayed images or moving images moved to the other places and aligned to a state before the alignment.
In addition, Tsuda discloses wherein the circuitry is further configured to store an arrangement state of the plurality of displayed images or moving images before being aligned (FIGS. 1 and 3; paragraph [0046], FIG. 4 schematically shows the content stored in a space database describing the structure of the virtual space 50. The space database is created, e.g., in the hard disk 38, in which an image ID, or identification of image data, thumb nail image data of the same, original image data of the same, and positional coordinates of an image object 52 related to the same in the virtual space 50 are stored so as to be associated with one another; paragraph [0063], when more image data is recorded in the hard disk 38 and much more image objects 52 are placed in the virtual space 50, the initial positions (hereinafter referred to as a "retracted position") of image objects 52 related to mutually related image data may be determined such that the image objects 52 are aligned in a single line and thus placed concentrated), and perform control to return the plurality of displayed images or moving images moved to the other places and aligned to a state before the alignment (Paragraphs [0066]-[0068], In the process at S102 in FIG. 9, an object extending-retracting process, shown in FIG. 15, may be additionally carried out. In this process, initially, whether or not there exists any image object group 62 located ahead of the viewpoint 54 is determined (S301) ... it is determined whether or not there exists any image object 52 constituting the image object group 62, which is returning to the retracted position thereof as the relevant image object group 62 moves away from the position ahead of the viewpoint 54 (S305). When such an image object 52 exists, the interpolation parameter is adjusted to become closer to 1 so that the image objects 52 constituting the image object group 62 move closer to the retracted positions (S306). With the above arrangement, the respective image objects 52 return to the respective retracted positions thereof as time passes ... the content of the space image may be switched depending on the state of the viewpoint 54).
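For illustration only (a sketch of the interpolation described in the quoted passage; the names are assumed): as the interpolation parameter is adjusted toward 1, each image object moves from its current aligned position back to the stored retracted position.

    def interpolated_position(aligned, retracted, t):
        """t = 0 -> at the aligned position; t = 1 -> back at the retracted position."""
        return tuple(a + (r - a) * t for a, r in zip(aligned, retracted))

    # Stepping t toward 1 each frame returns the object to its retracted
    # position over time, restoring the arrangement state before the alignment.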
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of inputting a diagnosis result of a diagnosis target detectable for a structure taught by KAJIWARA in view of SAWAKI to incorporate the teachings of Tsuda to have a computer control method for a viewpoint in a virtual space that stores the positions of the displayed images in a space database before they are moved and aligned, and returns the moved and aligned images to those stored positions, that is, to the state before the alignment. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify KAJIWARA in view of SAWAKI according to the relied-upon teachings of Tsuda to obtain the invention as specified in the claim.
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over KAJIWARA (U.S. Patent Application Publication 2019/0306419 A1) in view of SAWAKI et al (U.S. Patent Application Publication 2019/0026950 A1) in view of Sempe et al (U.S. Patent No. 11,250,617 B1).
Regarding claim 17, the combination of KAJIWARA in view of SAWAKI discloses everything claimed as applied above (see claim 1).
However, KAJIWARA does not specifically disclose wherein the circuitry is further configured to perform control to share the displayed image or the displayed moving image arranged in the virtual space to an outside.
In addition, Sempe discloses (Col 4, FIG. 1 is a diagram that illustrates the concept of using a camera control device 102 to control a virtual camera 116 in a virtual 3D environment 114 and capture electronic images (e.g., video and/or images) of the virtual 3D environment 114 using the virtual camera 116 ...; Col 17, lines 39-49, FIG. 10 illustrates a computing device 1010 on which modules of this technology may execute ... The computing device 1010 may include one or more processors 1012 ...; Col 18, lines 28-38, the processor 1012 may represent multiple processors and the memory device 1020 may represent multiple memory units that operate in parallel to the processing circuits ...) wherein the circuitry is further configured (Col 11, lines 46-60, FIG. 5 is a block diagram which illustrates an example system 500 for capturing electronic images (e.g., video) of a virtual 3D environment and streaming the electronic images to a plurality of client devices 514a-n to allow spectators to view the electronic images. In one example, the system 500 may support broadcasts of live and/or recorded digital media content captured using a virtual camera which may be controlled via a camera control device 522 ...) to perform control to share the displayed image or the displayed moving image arranged in the virtual space to an outside (Col 12, lines 45-60, a video feed of a video game competition captured using a virtual camera may be sent to a streaming server 504 configured to stream the video to spectator's client devices 514a-n. As an illustration shown in FIG. 6, a camera person, using a display device 602 (e.g., a head mounted display) and a camera control device, may stand on the sidelines of an eSport event taking place in a virtual 3D environment 608 and record live video of the event using a virtual camera 610. A video feed of the live video 620 may be sent to a streaming server, and a video stream 622 of the eSport event may be sent to a plurality of client devices 604 to allow spectators to view the live video 620 of the eSport event).
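For illustration only (an in-process sketch, not Sempe's streaming architecture): a feed of captured frames fanned out through a streaming server to multiple spectator client queues, mirroring the capture-to-streaming-server-to-client-devices flow described above.

    import queue

    class StreamingServer:
        def __init__(self):
            self.clients = []          # one queue per connected spectator

        def connect(self):
            q = queue.Queue()          # a new spectator client's inbound stream
            self.clients.append(q)
            return q

        def broadcast(self, frame):
            for q in self.clients:     # share the captured frame to the outside
                q.put(frame)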
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of inputting a diagnosis result of a diagnosis target detectable for a structure taught by KAJIWARA in view of SAWAKI to incorporate the teachings of Sempe, and to apply the method for controlling a virtual camera in a virtual environment taught by Sempe to share the captured virtual image to an outside. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify KAJIWARA in view of SAWAKI according to the relied-upon teachings of Sempe to obtain the invention as specified in the claim.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Xilin Guo whose telephone number is (571)272-5786. The examiner can normally be reached Monday - Friday 9:00 AM-5:30 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Hajnik can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/XILIN GUO/Primary Examiner, Art Unit 2616