Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This action is responsive to the application filed May 23, 2025. Claims 1-13 are presented for examination. Claims 1, 4, 10, and 11 are independent claims.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d), based on application No. 2020-179750 filed in Japan on October 27, 2020, which papers have been placed of record in the file.
Oath/Declaration
The Office acknowledges receipt of a properly signed Oath/Declaration submitted May 23, 2025.
Information Disclosure Statement
The Applicant’s Information Disclosure Statement filed (May 23, 2025) has been received, entered into the record, and considered.
Drawings
The drawings filed May 23, 2025 are accepted by the examiner.
Abstract
The abstract filed May 23, 2025 is accepted by the examiner.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969). A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).
Claims 1-13 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-15 of Application No. 18306422, now US Patent 12007563 B2. Although the conflicting claims are not identical, they are not patentably distinct from each other because the claims recite a display control device of a wearable device including a monitor, the display control device comprising: a memory; and at least one processor that is coupled to or incorporates the memory, and that is configured to: acquire an image to be displayed on the monitor; acquire a first operation instruction that is input through operation of an operation unit, including a graphical user interface, of a display apparatus in order to change a display magnification of the image and that is input through a magnification changing operation performed at the display apparatus, the display apparatus being a separate body from the wearable device; and change the display magnification of the image in accordance with the first operation instruction, wherein the image includes magnification changing operation-related information related to an amount of change in magnification with respect to an operation amount of the magnification changing operation performed during an imaging of the image, and the processor is further configured to match the amount of change in the magnification with respect to the operation amount of the magnification changing operation performed in the imaging, to an amount of change in magnification with respect to an operation amount when changing the display magnification of the image, based on the magnification changing operation-related information. These are therefore the same limitations as claimed in claims 1-15 of Application No. 18306422, now US Patent 12007563 B2.
This is an obviousness-type double patenting rejection.
US Application No. 19217890:
1. A display control device of a wearable device including a monitor, the display control device comprising: a memory; and at least one processor that is coupled to or incorporates the memory, and that is configured to: acquire an image to be displayed on the monitor; acquire a first operation instruction that is input through operation of an operation unit, including a graphical user interface, of a display apparatus in order to change a display magnification of the image and that is input through a magnification changing operation performed at the display apparatus, the display apparatus being a separate body from the wearable device; and change the display magnification of the image in accordance with the first operation instruction, wherein the image includes magnification changing operation-related information related to an amount of change in magnification with respect to an operation amount of the magnification changing operation performed during an imaging of the image, and the processor is further configured to match the amount of change in the magnification with respect to the operation amount of the magnification changing operation performed in the imaging, to an amount of change in magnification with respect to an operation amount when changing the display magnification of the image, based on the magnification changing operation-related information.
Application No. 18306422, now US Patent 12007563 B2:
1. A display control device of a wearable device including a monitor, the display control device comprising: a memory; and at least one processor that is coupled to or incorporates the memory, and that is configured to: acquire an image to be displayed on the monitor; acquire a first operation instruction that is input through operation of an operation unit, including a graphical user interface, of an imaging apparatus in order to change a display magnification of the image and that is input through a magnification changing operation performed at the imaging apparatus, the imaging apparatus being a separate body from the wearable device; and change the display magnification of the image in accordance with the first operation instruction, wherein the image includes magnification changing operation-related information related to an amount of change in magnification with respect to an operation amount of the magnification changing operation performed in an imaging at the imaging apparatus that has captured the image, and the processor is further configured to match the amount of change in the magnification with respect to the operation amount of the magnification changing operation performed in the imaging, to an amount of change in magnification with respect to an operation amount when changing the display magnification of the image, based on the magnification changing operation-related information.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
Claims 1-9 in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “configured” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “configured” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the words “processor configured to” in claims 1, 3, 4, 5, 6, 8 and 9 with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-13 are rejected under 35 U.S.C. 103 as being unpatentable over You et al. (IDS provided prior art US 11409091 B2) in view of Pierce (IDS provided prior art US 20210306599 A1).
As to Claim 1:
You et al. discloses a display control device of a wearable device including a monitor (You, see Abstract, where You discloses a surgical microscope includes detecting a direction of gaze of a user and detecting an amount of a movement of a head of the user. The movement of the head of the user is of a translatory movement type and a rotatory movement type. Amounts of movements or rotations of the camera or changes in the magnification are performed based on the detected amount of a movement of the head of the user), the display control device comprising: a memory; and at least one processor that is coupled to or incorporates the memory (You, see figure 1), and that is configured to: acquire an image to be displayed on the monitor (You, see 39 in figure 3A and column 4 lines 63-65, where You discloses that the display can be, for example a flat panel display 39 which can be mounted on the support 17, or a head-mounted display 41 carried by the user 33); acquire a first operation instruction that is input (You, see 35 in figure 1) through operation of an operation unit (You, see 11 in figure 1), including a graphical user interface (You, see column 7 lines 56-58, where You discloses that the controller 13 can be set by the user via a suitable interface), of a display apparatus in order to change a display magnification of the image and that is input through a magnification changing operation performed at the display apparatus (You, see figures 3A through 3C), the display apparatus (You, see 3 in figure 1) being a separate body from the wearable device (You, see 41 in figure 1); and change the display magnification of the image in accordance with the first operation instruction (You, see figures 3A through 3C), wherein the image includes magnification changing operation-related information related to a change in magnification with respect to an operation amount of the magnification changing operation performed during an imaging of the image (You, see column 7 
lines 16-30, where You discloses that, in the method illustrated above with reference to FIGS. 2, 3A, 3B, and 3C, two first and second translatory movements of the cameras are performed in steps 113 and 121. It is, however, possible to omit the first translatory movement of the camera of step 113 and to perform only one translatory movement of the camera in step 121 so that the intermediate situation shown in FIG. 3B is omitted. The single translatory movement will displace the cameras by an amount corresponding to the sum of the first and second translatory movement amounts illustrated above. Moreover, the rotatory movement and the translatory movement which are performed in subsequent steps 119 and 121 in FIG. 2, can be combined to one step which performs a simultaneous combined translatory and rotatory movement of the cameras between the situations shown in FIG. 3A and FIG. 3C), and the processor is further configured to match the change in the magnification with respect to the operation amount of the magnification changing operation performed in the imaging (You, see figure 4), to a change in magnification with respect to an operation amount when changing the display magnification of the image, based on the magnification changing operation-related information (You, see column 7 lines 15-67).
You differs from the claimed subject matter in that You does not explicitly disclose an amount of change. However, in an analogous art, Pierce discloses an amount of change (Pierce, see paragraph [0045], where Pierce discloses that one or more loupe(s) 122 may be used by the user 160 of the head-worn device 130. The loupe(s) 122 may have any magnification depending on the preferences of the user 160 and/or the type of procedure being performed. For example, the loupe(s) 122 may have a magnification of 2.5x, 3.5x, 4x, 4.5x, etc. In some examples, the image data captured by the camera(s) 116 may be captured at a zoom or aspect ratio that mimics the magnification of the loupe(s) 122, whether or not the loupe(s) 122 are actually used (e.g., where loupe(s) 122 aren't used, the image data may still be generated with a field of view that mimics the use of loupe(s) 122). For example, when capturing image data of a medical procedure for streaming and/or later viewing, the viewer (e.g., a student, physician, interested party, etc.) may desire to view the procedure from the viewpoint of the user 160 of the head-worn device 130. As such, the zoom of the camera(s) 116 may be adjusted such that the field of view captured in the image data is similar to the field of view of the user 160 through the loupe(s) 122; it is noted that a magnification of 2.5x, 3.5x, 4x, 4.5x, etc. teaches or suggests an amount of change in the magnification).
It would have been obvious to one of ordinary skill in the art to modify the invention of You with Pierce. One would be motivated to modify You by disclosing an amount of change as taught by Pierce, thereby providing an improved system that comprises a medical loupe system for lighting control, streaming, and augmented reality (AR) assisted procedures (Pierce, see paragraph [0005]).
As to Claim 2:
You in view of Pierce discloses the display control device according to claim 1, wherein the magnification changing operation is based on a zoom magnification changing operation performed during the imaging of the image (Pierce, see figures 1A through 1C and paragraph [0045], where Pierce discloses that one or more loupe(s) 122 may be used by the user 160 of the head-worn device 130. The loupe(s) 122 may have any magnification depending on the preferences of the user 160
and/or the type of procedure being performed. For example, the loupe(s) 122 may have a magnification of 2.5x, 3.5x, 4x, 4.5x, etc. In some examples, the image data captured by the camera(s) 116 may be captured at a zoom or aspect ratio that mimics the magnification of the loupe(s) 122, whether or not the loupe(s) 122 are actually used (e.g., where loupe(s) 122 aren't used, the image data may still be generated with a field of view that mimics the use of loupe(s) 122). For example, when capturing image data of a medical procedure for streaming and/or later viewing, the viewer (e.g., a student, physician, interested party, etc.) may desire to view the procedure from the viewpoint of the user 160 of the head-worn device 130. As such, the zoom of the camera(s) 116 may be adjusted such that the field of view captured in the image data is similar to the field of view of the user 160 through the loupe(s) 122. In some embodiments, the zoom may be optical zoom, while in others, the zoom may be digital zoom. In addition, in some examples, the image quality or aspect ratio may be such that cropping the images may result in a field of view that mimics the field of view of the user 160 through the loupe(s) 122 without any digital or optical zoom).
As to Claim 3:
You in view of Pierce discloses the display control device according to claim 1, wherein the at least one processor is configured to control an amount of change in the display magnification of the image in accordance with an amount of change in magnification with respect to an operation amount of the magnification changing operation (Pierce, see paragraph [0045], where Pierce discloses that any number of different crops may be made to adjust the field of view to mimic views of loupe(s) 122 of varying magnifications. For example, a first user may wish to view the image data (e.g., video of a
procedure) at a first magnification mimicking 2.5x loupe(s) 122, while a second user may wish to view the image data at a second magnification mimicking 3.5x loupe(s) 122. As such, different crops may be generated to accommodate both users. In other examples, such as where two or more cameras 116 are used, different cameras 116 may capture image data at varying zooms, aspect ratios, and/or qualities to provide different fields of view, magnifications, and/or other parameters
for viewing by users).
As to Claim 4:
You et al. disclose a display control device of a wearable device including a monitor (You, see Abstract, where You discloses a surgical microscope includes detecting a direction of gaze of a user and detecting an amount of a movement of a head of the user. The movement of the head of the user is of a translatory movement type and a rotatory movement type. Amounts of movements or rotations of the camera or changes in the magnification are performed based on the detected amount of a movement of the head of the user), the display control device comprising: a memory; and at least one processor that is coupled to or incorporates the memory (You, see figure 1), and that is configured to: acquire an image to be displayed on the monitor (You, see 39 in figure 3A and column 4 lines 63-65, where You discloses that the display can be, for example a flat panel display 39 which can be mounted on the support 17, or a head-mounted display 41 carried by the user 33); acquire a first operation instruction that is input (You, see 35 in figure 1) through operation of an operation unit (You, see 11 in figure 1) of a display apparatus in order to change a display magnification of the image and that is input through a magnification changing operation performed at the display apparatus (You, see figures 3A through 3C), the display apparatus (You, see 3 in figure 1) being a separate body from the wearable device (You, see 41 in figure 1); change the display magnification of the image in accordance with the first operation instruction (You, see figures 3A through 3C); acquire, as a second operation instruction (You, see figure 4 and column 8 lines 1-8, where You discloses in FIG. 4 that rotatory movements in the up-down direction result in greater amounts of rotatory movements of the cameras than equal amounts of rotatory movements of the head in the left-right direction. 
The different slopes of lines 137 and 131 are provided since the accessible range of head movements in the left-right direction is greater than in the up-down direction; the rotatory movements of the camera at different slopes and angles as shown in figure 4 teaches or suggests angle-of-view information), a change in a posture of the display apparatus detected by a posture detection unit that detects the posture of the display apparatus; and zoom the image in accordance with the second operation instruction (You, see figure 3A through 3C and column 6 lines 1-14, where You discloses that the corresponding images of the object 32, tools 35 and hand 37 as recorded by the cameras 9 are displayed on the display 39. The cameras 9 are oriented along an axis 123. As shown in FIG. 3A, the object 32, having a circular shape, is not completely positioned in the field of view of the camera. The center of the circular object 32 is displayed at a position of the display 39 which is offset relative to a center 107 of the display 39. It is assumed that the user 33 intends to position the cameras 9 relative to the object 32 such that the object 32 is centered relative to the display 39. For this purpose, the user 33 issues the start command, which is detected by the controller 13 and lets the controller 13 detect the direction of gaze of the user 33. Wherein the position 111 is offset relative to the center 107 of the display 39, and wherein the position 111 on the display 39 is a position onto which the center of the circular object 32 is imaged), wherein the image includes angle-of-view information related to an angle of view when capturing the image (You, see figure 4 and column 8 lines 1-6, where You discloses that rotatory movements in the up-down direction result in greater amounts of rotatory movements of the cameras than equal amounts of rotatory movements of the head in the left-right direction. 
The different slopes of lines 137 and 131 are provided since the accessible range of head movements in the left-right direction is greater than in the up-down direction), and the at least one processor is further configured to correct a zoom amount when zooming the image in accordance with the second operation instruction based on the angle-of-view information (You, see figure 4 and column 8 lines 1-8, where You discloses that, as is apparent from FIG. 4, rotatory movements in the up-down direction result in greater amounts of rotatory movements of the cameras than equal amounts of rotatory movements of the head in the left-right direction. The different slopes of lines 137 and 131 are provided since the accessible range of head movements in the left-right direction is greater than in the up-down direction; the rotatory movements of the camera at different slopes and angles as shown in figure 4 teaches or suggests angle-of-view information).
You differs from the claimed subject matter in that You does not explicitly disclose an amount of scroll. However, in an analogous art, Pierce discloses an amount of scroll (Pierce, see 324 in figure 3C and paragraph [0060], where Pierce discloses that one or more control elements 324 may be available within the GUI 320 for controlling the playback (e.g., play, stop, pause, record, fast-forward, rewind, capture screenshot, add notes, add comments, provide commentary, etc.). The client application(s) 150 may include various settings 332, and GUI 330 may allow the user to adjust settings 332 (e.g., settings 332A, 332B, and 332F). For example, the user may be able to adjust the image quality, frame rate, other image or video settings, zoom level, magnification level, video types, login information, account type, audio settings, credential or authentication information, and/or the like).
It would have been obvious to one of ordinary skill in the art to modify the invention of You with Pierce. One would be motivated to modify You by disclosing an amount of scroll as taught by Pierce, thereby providing an improved system that comprises a medical loupe system for lighting control, streaming, and augmented reality (AR) assisted procedures (Pierce, see paragraph [0005]).
As to Claim 5:
You in view of Pierce discloses the display control device according to claim 1, wherein the at least one processor is mounted in the wearable device, and the at least one processor is configured to: acquire the image from an external server (You, see 39 in figure 3A and column 4 lines 63-65, where You discloses that the display can be, for example a flat panel display 39 which can be mounted on the support 17; it is noted that computer 17 teaches or suggests an external computer/server); and acquire, through the server, the first operation instruction, which is input through the operation of the operation unit of the display apparatus (You, see 3, 11 and 35 in figure 1).
As to Claim 6:
You in view of Pierce discloses the display control device according to claim 1, wherein the at least one processor is mounted in the wearable device, and the at least one processor is configured to: acquire the image from an external server (You, see 39 in figure 3A and column 4 lines 63-65, where You discloses that the display can be, for example a flat panel display 39 which can be mounted on the support 17; it is noted that computer 17 teaches or suggests an external computer/server); and acquire, directly from the display apparatus, the first operation instruction, which is input through the operation of the operation unit of the display apparatus (You, see 3, 11 and 35 in figure 1).
As to Claim 7:
You in view of Pierce discloses the display control device according to claim 1, wherein the image is captured by an imaging apparatus (You, see 3 in figure 1) that is a separate apparatus from the display apparatus (You, see 41 in figure 1).
As to Claim 8:
You in view of Pierce discloses the display control device according to claim 7, wherein the image includes information about a tilt angle that is an inclination angle of the imaging apparatus with respect to a horizontal direction in capturing the image (You, see figure 4 and column 8 lines 1-6, where You discloses that rotatory movements in the up-down direction result in greater amounts of rotatory movements of the cameras than equal amounts of rotatory movements of the head in the left-right direction. The different slopes of lines 137 and 131 are provided since the accessible range of head movements in the left-right direction is greater than in the up-down direction), and the at least one processor is configured to change an initial position of a center position in a case of changing the display magnification of the image based on the tilt angle (You, see figure 4 and figures 3A through 3C).
As to Claim 9:
You in view of Pierce discloses the display control device according to claim 7, wherein the image includes information about a roll angle that is a rotation angle of the imaging apparatus about an optical axis with respect to a horizontal direction in capturing the image (You, see figure 4 and column 8 lines 1-6, where You discloses that rotatory movements in the up-down direction result in greater amounts of rotatory movements of the cameras than equal amounts of rotatory movements of the head in the left-right direction. The different slopes of lines 137 and 131 are provided since the accessible range of head movements in the left-right direction is greater than in the up-down direction), and the at least one processor is configured to correct the image with respect to the horizontal direction based on the roll angle (You, see figure 4 and figures 3A through 3C).
As to Claim 10:
You et al. disclose a display control method of a wearable device including a monitor (You, see Abstract, where You discloses a surgical microscope includes detecting a direction of gaze of a user and detecting an amount of a movement of a head of the user. The movement of the head of the user is of a translatory movement type and a rotatory movement type. Amounts of movements or rotations of the camera or changes in the magnification are performed based on the detected amount of a movement of the head of the user), the display control method comprising: acquiring an image to be displayed on the monitor (You, see 39 in figure 3A and column 4 lines 63-65, where You discloses that the display can be, for example a flat panel display 39 which can be mounted on the support 17, or a head-mounted display 41 carried by the user 33); acquiring a first operation instruction that is input (You, see 35 in figure 1) through operation of an operation unit (You, see 11 in figure 1), including a graphical user interface (You, see column 7 lines 56-58, where You discloses that the controller 13 can be set by the user via a suitable interface), of a display apparatus in order to change a display magnification of the image and that is input through a magnification changing operation performed at the display apparatus (You, see figures 3A through 3C), the display apparatus (You, see 3 in figure 1) being a separate body from the wearable device (You, see 41 in figure 1); and changing the display magnification of the image in accordance with the first operation instruction (You, see figures 3A through 3C), wherein the image includes magnification changing operation-related information related to an amount of change in magnification with respect to an operation amount of the magnification changing operation (You, see column 7 lines 16-30, where You discloses that, in the method illustrated above with reference to FIGS. 
2, 3A, 3B, and 3C, two first and second translatory movements of the cameras are performed in steps 113 and 121. It is, however, possible, to omit the first translatory movement of the camera of step 113 and to perform only one translatory movement of the camera in step 121 so that the intermediate situation shown in FIG. 3B is omitted. The single translatory movement will displace the cameras by an amount corresponding to the sum of the first and second translatory movement amounts illustrated above. Moreover, the rotatory movement and the translatory movement which are performed in subsequent steps 119 and 121 in FIG. 2, can be combined to one step which performs a simultaneous combined translatory and rotatory movement of the cameras between the situations shown in FIG. 3A and FIG. 3C.) performed during an imaging of the image (You, see figures 3A through 3C), and the display control method further comprises matching the change magnification with respect to the operation of the magnification changing operation performed in the imaging (You, see column 7 lines 16-30, where You discloses that In the method illustrated above with reference to FIGS. 2, 3A, 3B, and 3C, two first and second translatory movements of the cameras are performed in steps 113 and 121. It is, however, possible, to omit the first translatory movement of the camera of step 113 and to perform only one translatory movement of the camera in step 121 so that the intermediate situation shown in FIG. 3B is omitted. The single translatory movement will displace the cameras by an amount corresponding to the sum of the first and second translatory movement amounts illustrated above. Moreover, the rotatory movement and the translatory movement which are performed in subsequent steps 119 and 121 in FIG. 2, can be combined to one step which performs a simultaneous combined translatory and rotatory movement of the cameras between the situations shown in FIG. 3A and FIG. 
3C), to a change in magnification with respect to an operation amount when changing the display magnification of the image (You, see figure 4), based on the magnification changing operation-related information (You, see column 7 lines 15-67).
You differs from the claimed subject matter in that You does not explicitly disclose an amount of change. However, in an analogous art, Pierce discloses an amount of change (Pierce, see paragraph [0045], where Pierce discloses that one or more loupe(s) 122 may be used by the user 160 of the head-worn device 130. The loupe(s) 122 may have any magnification depending on the preferences of the user 160 and/or the type of procedure being performed. For example, the loupe(s) 122 may have a magnification of 2.5x, 3.5x, 4x, 4.5x, etc. In some examples, the image data captured by the camera(s) 116 may be captured at a zoom or aspect ratio that mimics the magnification of the loupe(s) 122, whether or not the loupe(s) 122 are actually used (e.g., where loupe(s) 122 aren't used, the image data may still be generated with a field of view that mimics the use of loupe(s) 122). For example, when capturing image data of a medical procedure for streaming and/or later viewing, the viewer (e.g., a student, physician, interested party, etc.) may desire to view the procedure from the viewpoint of the user 160 of the head-worn device 130. As such, the zoom of the camera(s) 116 may be adjusted such that the field of view captured in the image data is similar to the field of view of the user 160 through the loupe(s) 122. It is noted that a magnification of 2.5x, 3.5x, 4x, 4.5x, etc. teaches or suggests an amount of change in the magnification).
It would have been obvious to one of ordinary skill in the art to modify the invention of You with Pierce. One would be motivated to modify You by incorporating an amount of change as taught by Pierce, thereby providing an improved system that comprises a medical loupe system for lighting control, streaming, and augmented reality (AR) assisted procedures (Pierce, see paragraph [0005]).
As to Claim 11:
You et al. disclose a non-transitory storage medium storing a program for executing display control processing by a wearable device including a monitor, the display control processing (You, see Abstract, where You discloses that a surgical microscope includes detecting a direction of gaze of a user and detecting an amount of a movement of a head of the user. The movement of the head of the user is of a translatory movement type and a rotatory movement type. Amounts of movements or rotations of the camera or changes in the magnification are performed based on the detected amount of a movement of the head of the user) comprising: acquiring an image to be displayed on the monitor (You, see 39 in figure 3A and column 4 lines 63-65, where You discloses that the display can be, for example, a flat panel display 39 which can be mounted on the support 17, or a head-mounted display 41 carried by the user 33); acquiring a first operation instruction that is input (You, see 35 in figure 1) through operation of an operation unit (You, see 11 in figure 1), including a graphical user interface (You, see column 7 lines 56-58, where You discloses that the controller 13 can be set by the user via a suitable interface), of a display apparatus in order to change a display magnification of the image and that is input through a magnification changing operation performed at the display apparatus (You, see figures 3A through 3C), the display apparatus (You, see 3 in figure 1) being a separate body from the wearable device (You, see 41 in figure 1); and changing the display magnification of the image in accordance with the first operation instruction (You, see figures 3A through 3C), wherein the image includes magnification changing operation-related information related to a change in magnification with respect to an operation amount of the magnification changing operation performed during an imaging of the image (You, see column 7 lines 16-30, where You discloses that in the method illustrated above with reference to FIGS. 2, 3A, 3B, and 3C, two first and second translatory movements of the cameras are performed in steps 113 and 121. It is, however, possible to omit the first translatory movement of the camera of step 113 and to perform only one translatory movement of the camera in step 121 so that the intermediate situation shown in FIG. 3B is omitted. The single translatory movement will displace the cameras by an amount corresponding to the sum of the first and second translatory movement amounts illustrated above. Moreover, the rotatory movement and the translatory movement which are performed in subsequent steps 119 and 121 in FIG. 2 can be combined to one step which performs a simultaneous combined translatory and rotatory movement of the cameras between the situations shown in FIG. 3A and FIG. 3C), and the display control processing further comprises matching the change in magnification with respect to the operation amount of the magnification changing operation performed in the imaging (You, see figure 4), to a change in magnification with respect to an operation amount when changing the display magnification of the image, based on the magnification changing operation-related information (You, see column 7 lines 15-67).
You differs from the claimed subject matter in that You does not explicitly disclose an amount of change. However, in an analogous art, Pierce discloses an amount of change (Pierce, see paragraph [0045], where Pierce discloses that one or more loupe(s) 122 may be used by the user 160 of the head-worn device 130. The loupe(s) 122 may have any magnification depending on the preferences of the user 160 and/or the type of procedure being performed. For example, the loupe(s) 122 may have a magnification of 2.5x, 3.5x, 4x, 4.5x, etc. In some examples, the image data captured by the camera(s) 116 may be captured at a zoom or aspect ratio that mimics the magnification of the loupe(s) 122, whether or not the loupe(s) 122 are actually used (e.g., where loupe(s) 122 aren't used, the image data may still be generated with a field of view that mimics the use of loupe(s) 122). For example, when capturing image data of a medical procedure for streaming and/or later viewing, the viewer (e.g., a student, physician, interested party, etc.) may desire to view the procedure from the viewpoint of the user 160 of the head-worn device 130. As such, the zoom of the camera(s) 116 may be adjusted such that the field of view captured in the image data is similar to the field of view of the user 160 through the loupe(s) 122. It is noted that a magnification of 2.5x, 3.5x, 4x, 4.5x, etc. teaches or suggests an amount of change in the magnification).
It would have been obvious to one of ordinary skill in the art to modify the invention of You with Pierce. One would be motivated to modify You by incorporating an amount of change as taught by Pierce, thereby providing an improved system that comprises a medical loupe system for lighting control, streaming, and augmented reality (AR) assisted procedures (Pierce, see paragraph [0005]).
As to Claim 12:
You in view of Pierce discloses the display control device according to claim 4, wherein the operation unit of the display apparatus includes a graphical user interface (You, see column 7 lines 56-58, where You discloses that the controller 13 can be set by the user via a suitable interface).
As to Claim 13:
You in view of Pierce discloses the display control device according to claim 4, wherein the posture of the display apparatus includes an inclination of the display apparatus (You, see figure 4 and column 8 lines 1-6, where You discloses that rotatory movements of the head in the up-down direction result in greater amounts of rotatory movements of the cameras than equal amounts of rotatory movements of the head in the left-right direction. The different slopes of lines 137 and 131 are provided since the accessible range of head movements in the left-right direction is greater than in the up-down direction).
Conclusion
The prior art made of record and not relied upon is considered pertinent to
applicant's disclosure. Natsuyama (US 11003351 B2) discloses an information device and display processing method that allow any position in an image to be enlarged or reduced, and the degree of enlargement or reduction to be performed, in an intuitive and intelligible manner. The information device receives a predetermined touch operation performed on a touch panel, retrieves a second image corresponding to a first image displayed on a display unit in accordance with the location of the touch operation, displays the second image within a ring-shaped frame image in accordance with the position of the touch operation, receives a subsequent touch operation on the touch panel along the ring-shaped frame image, and changes the second image in accordance with the operational amount and operational direction of the subsequent touch operation.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NELSON ROSARIO whose telephone number is (571) 270-1866. The examiner can normally be reached Monday through Friday, 7:30am-5:00pm EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Eason, can be reached at (571) 270-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NELSON M ROSARIO/Primary Examiner, Art Unit 2624