DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Regarding drawings
Examiner notes that the replacement drawing of fig. 13 is approved herein.
Regarding claim interpretation
Examiner notes that the previously set forth 112(f) claim interpretations are withdrawn herein.
Regarding 35 U.S.C. 112(b)
Examiner notes that the 35 U.S.C. 112(b) rejections previously set forth are withdrawn in view of the amendments to the claims.
Regarding prior art
Applicant’s arguments with respect to claims 1-11, 14-15, and 18-20 have been considered but are moot in view of the new grounds of rejection necessitated by amendment. Specifically, new teachings are relied upon to teach the handheld apparatus; however, the examiner will address those arguments that remain relevant to the current rejection.
Applicant's arguments filed 12/03/2025 regarding the teachings of Miyachi have been fully considered, but they are not persuasive. For example, applicant argues “Miyachi does not describe recognizing the ultrasound probe from only a predetermined partial region of the optical image and generating the trimmed image from only the predetermined partial region” and “in amended claim 1, it is possible to obtain the same effect as in the case where the probe is recognized from all regions of the optical image even without recognizing the ultrasound probe from all regions of the optical image (see, e.g. para [0068])” (REMARKS pg. 14). Examiner respectfully disagrees. Pg. 9 of Miyachi explicitly teaches that the probe detection unit 25 can detect the position of the ultrasound probe based on color information of the digital image. Absent any special definition of ‘predetermined partial region,’ upon which applicant does not appear to rely, the claim was given its broadest reasonable interpretation such that the position at which the probe is found is considered to be the recited predetermined partial region: Miyachi would only detect the position of the ultrasound probe in a region where the ultrasound probe exists in the digital image (e.g., based on color), and not from other regions. Such a region where the ultrasound probe exists is considered to be a predetermined partial region under the broadest reasonable interpretation, as the manner in which the region is predetermined is not set forth in the claim language. For at least the reasons above, applicant’s arguments against the teachings of Miyachi are not found persuasive.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 8-9, 11, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Miyachi (WO 2019064706 A1), hereinafter Miyachi, in view of Stolka et al. (US 20170116729 A1), hereinafter Stolka.
Regarding claim 1,
Miyachi discloses an ultrasound diagnostic apparatus comprising:
an ultrasound probe (at least fig. 11 (2) and corresponding disclosure in at least pg. 2, last paragraph); and
an apparatus main body (at least fig. 11 (24, 9, 13, 8, and 11) and corresponding disclosure in at least pg. 3) connected to the ultrasound probe (2) (see at least fig. 1),
wherein the apparatus main body includes:
a processor (24, 13, and 11);
a monitor (at least fig. 11 (8) and corresponding disclosure in at least pg. 4, second paragraph, which discloses the touch panel 8 of the ultrasonic diagnostic apparatus 1 has a display screen for displaying the ultrasonic image acquired by the ultrasonic image acquisition unit 21 and the digital image acquired by the digital image acquisition unit) disposed on one surface of the apparatus main body; and
a first optical camera (9; pg. 3, which discloses the digital image acquisition unit of the ultrasound diagnostic apparatus captures a state in which the ultrasound probe is in contact with the subject to acquire a digital image, and the digital image acquisition unit 9 can use a digital camera incorporated in the ultrasonic diagnostic apparatus 1. Examiner notes that a digital camera is understood to be an optical camera),
wherein the processor is configured to:
generate an ultrasound image from a reception signal (pg. 3, last four paragraphs, which disclose that the receiving unit of the ultrasound image acquisition unit amplifies a reception signal and transmits the amplified reception signal to the AD conversion unit, which converts the reception signal into digitized element data output to the image generation unit; that the image generation unit has a signal processing unit, a DSC, and an image processing unit connected in series; and which further recite “when the ultrasonic image generated by the image generation unit”. Thus, the ultrasound image generation unit 21 generates an ultrasound image from a reception signal) obtained by transmitting and receiving an ultrasound beam to and from an examination area of a subject under examination using the ultrasound probe (pg. 3, fourth paragraph, disclosing that the ultrasonic probe transmits an ultrasound wave, receives a reflected wave from the subject, and outputs a reception signal),
control the first optical camera to generate an optical image by imaging the subject under examination including the ultrasound probe during examination of the examination area (see at least fig. 12 (D) and corresponding disclosure in at least pg. 9),
recognize the ultrasound probe from the optical image (pg. 9 sixth full paragraph disclosing that the probe detection unit of the processor detects (i.e. recognizes) the position of the ultrasound probe based on the digital image D acquired by the digital image acquisition unit 9. For example, the probe detection unit 25 can detect the position of the ultrasound probe 2 based on color information of the digital image D and last paragraph which discloses detecting the position of the ultrasonic probe from the digital image),
generate a trimmed image (at least fig. 15 (TD)) by trimming a region including the ultrasound probe from the optical image in a case where the ultrasound probe is recognized (pg. 9, seventh full paragraph, which discloses that the trimming unit of the processor generates a trimmed image in which the peripheral portion of the position of the ultrasonic probe detected by the probe detection unit 25 is cut out from the digital image), and
display the ultrasound image and the trimmed image on the monitor (see at least fig. 16 and pg. 10, second full paragraph, which discloses that the trimmed image TD can be reduced and displayed superimposed on the ultrasound image U, as with the digital image D in the first embodiment),
wherein the processor recognizes the ultrasound probe from only a predetermined partial region of the optical image (pg. 9-10 disclosing the method of detecting the position of the ultrasound probe based on color information of the digital image and the position of the ultrasound probe in the digital image D is detected by the probe detection unit, where the predetermined partial region of the optical image is broadly recited such that it reads on any region in which the ultrasound probe is present), and
wherein the processor generates the trimmed image from only the predetermined partial region (pg. 10, first paragraph, which discloses that when the position of the ultrasonic probe 2 in the digital image D is detected by the probe detection unit 25, the trimming unit 26 trims the periphery of the position of the ultrasonic probe 2 from the digital image D to generate a trimmed image TD. Examiner thus notes that the trimmed image is generated only from the predetermined partial region in which the ultrasound probe is recognized).
Miyachi fails to explicitly teach wherein the apparatus main body is a handheld terminal apparatus.
Nonetheless, Stolka, in a similar field of endeavor involving ultrasound imaging, teaches an apparatus main body (at least fig. 1 (160) and corresponding disclosure in at least [0063]) including a monitor (at least fig. 1 (230) and corresponding disclosure in at least [0063]) disposed on one surface of the apparatus main body (see at least fig. 1), a first camera (at least fig. 1 (100) and corresponding disclosure in at least [0063]), and a processor (at least fig. 1 (110) and corresponding disclosure in at least [0063]), wherein the apparatus main body is a handheld terminal apparatus ([0063] which discloses handheld unit 160 may be for example, a tablet, a notebook, or smartphone, and one or more cameras 100 and image processing unit 110 may be internal components of handheld unit 160).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified the apparatus main body of Miyachi to be a handheld terminal apparatus as taught by Stolka in order to allow for local, compact, and non-intrusive solutions, i.e., ideal tracking systems for hand-held and compact ultrasound systems used in intervention and point-of-care clinical suites (Stolka [0028]). A person having ordinary skill would have further recognized the benefit of the enhanced mobility of the system of Miyachi, allowing for tracking directly from the display's point of view while allowing the operator to move the display during imaging procedures as desired.
Regarding claim 8,
Miyachi further teaches wherein the processor recognizes, in the optical image, the ultrasound probe from only a trimmed region set (pg. 9, which discloses that the trimming unit generates a trimmed image in which the peripheral portion of the position of the ultrasonic probe detected by the probe detection unit is cut out from the digital image) as the predetermined partial region based on an instruction from a user (pg. 4, which discloses an input unit and the like for setting the imaging conditions, and further discloses that the device control unit 12 of the processor 24 operates based on an instruction input by the user via the ultrasonic image operation unit 22 and the digital image operation unit 23 of the touch panel 8 and an operation program stored in the storage unit 13. Thus, any trimmed region/predetermined partial region is necessarily based on instructions input by the user under the broadest reasonable interpretation), and
the processor generates the trimmed image from only the trimmed region (pg. 9, which discloses that the trimming unit generates a trimmed image in which the peripheral portion of the position of the ultrasonic probe detected by the probe detection unit is cut out from the digital image).
Regarding claim 9,
Miyachi further teaches wherein the processor recognizes the ultrasound probe in a state of being held by a user’s hand from the optical image (see at least figs. 13-15).
Regarding claim 11,
Miyachi further teaches wherein the processor displays the optical image on the monitor in a case where the ultrasound probe is not recognized (see at least fig. 12. Examiner notes that a person having ordinary skill in the art would have recognized that, if the probe is not recognized, the system would function to display the optical image, as disclosed in pg. 6: “the digital image is displayed superimposed on the end of the ultrasound image U”), and displays the trimmed image on the monitor in a case where the ultrasound probe is recognized (see at least fig. 16).
Regarding claim 15,
Miyachi further teaches wherein the processor generates the trimmed image by trimming from the optical image, a region including, in addition to the ultrasound probe, the subject under examination in a range representing a position and an orientation of the ultrasound probe during the examination in the examination area (see at least fig. 15 depicting the trimmed image including a region including the subject under examination in a range representing a position and orientation of the ultrasound probe during the examination).
Claims 2-6 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Miyachi in view of Stolka and further in view of Tesar et al. (US 20150018622 A1), hereinafter Tesar.
Regarding claim 2,
Miyachi teaches an ultrasound diagnostic apparatus comprising:
an ultrasound probe (at least fig. 11 (2) and corresponding disclosure in at least pg. 2, last paragraph); and
an apparatus main body (at least fig. 11 (24, 9, 13, 8, and 11) and corresponding disclosure in at least pg. 3) connected to the ultrasound probe (2) (see at least fig. 1),
wherein the apparatus main body includes:
a processor (24, 13, and 11);
a monitor (at least fig. 11 (8) and corresponding disclosure in at least pg. 4, second paragraph, which discloses the touch panel 8 of the ultrasonic diagnostic apparatus 1 has a display screen for displaying the ultrasonic image acquired by the ultrasonic image acquisition unit 21 and the digital image acquired by the digital image acquisition unit) disposed on one surface of the apparatus main body; and
a first optical camera (9; pg. 3, which discloses the digital image acquisition unit of the ultrasound diagnostic apparatus captures a state in which the ultrasound probe is in contact with the subject to acquire a digital image, and the digital image acquisition unit 9 can use a digital camera incorporated in the ultrasonic diagnostic apparatus 1. Examiner notes that a digital camera is understood to be an optical camera),
wherein the processor is configured to:
generate an ultrasound image from a reception signal (pg. 3, last four paragraphs, which disclose that the receiving unit of the ultrasound image acquisition unit amplifies a reception signal and transmits the amplified reception signal to the AD conversion unit, which converts the reception signal into digitized element data output to the image generation unit; that the image generation unit has a signal processing unit, a DSC, and an image processing unit connected in series; and which further recite “when the ultrasonic image generated by the image generation unit”. Thus, the ultrasound image generation unit 21 generates an ultrasound image from a reception signal) obtained by transmitting and receiving an ultrasound beam to and from an examination area of a subject under examination using the ultrasound probe (pg. 3, fourth paragraph, disclosing that the ultrasonic probe transmits an ultrasound wave, receives a reflected wave from the subject, and outputs a reception signal),
control the first optical camera to generate an optical image by imaging the subject under examination including the ultrasound probe during examination of the examination area (see at least fig. 12 (D) and corresponding disclosure in at least pg. 9),
recognize the ultrasound probe from the optical image (pg. 9 sixth full paragraph disclosing that the probe detection unit of the processor detects (i.e. recognizes) the position of the ultrasound probe based on the digital image D acquired by the digital image acquisition unit 9. For example, the probe detection unit 25 can detect the position of the ultrasound probe 2 based on color information of the digital image D and last paragraph which discloses detecting the position of the ultrasonic probe from the digital image),
generate a trimmed image (at least fig. 15 (TD)) by trimming a region including the ultrasound probe from the optical image in a case where the ultrasound probe is recognized (pg. 9, seventh full paragraph, which discloses that the trimming unit of the processor generates a trimmed image in which the peripheral portion of the position of the ultrasonic probe detected by the probe detection unit 25 is cut out from the digital image), and
display the ultrasound image and the trimmed image on the monitor (see at least fig. 16 and pg. 10, second full paragraph, which discloses that the trimmed image TD can be reduced and displayed superimposed on the ultrasound image U, as with the digital image D in the first embodiment),
wherein the processor recognizes the ultrasound probe from only a predetermined partial region of the optical image (pg. 9-10 disclosing the method of detecting the position of the ultrasound probe based on color information of the digital image and the position of the ultrasound probe in the digital image D is detected by the probe detection unit, where the predetermined partial region of the optical image is broadly recited such that it reads on any region in which the ultrasound probe is present), and
wherein the processor generates the trimmed image from only the predetermined partial region (pg. 10, first paragraph, which discloses that when the position of the ultrasonic probe 2 in the digital image D is detected by the probe detection unit 25, the trimming unit 26 trims the periphery of the position of the ultrasonic probe 2 from the digital image D to generate a trimmed image TD. Examiner thus notes that the trimmed image is generated only from the predetermined partial region in which the ultrasound probe is recognized).
Miyachi fails to explicitly teach wherein the apparatus main body is a handheld terminal apparatus, a plurality of optical cameras including the first optical camera and a second optical camera having different angles of view from each other, the processor being configured to select one or more optical cameras from among the plurality of optical cameras based on an instruction from a user, and wherein the plurality of optical cameras are disposed on the one surface of the apparatus main body or on another surface of the apparatus main body.
Nonetheless, Stolka, in a similar field of endeavor involving ultrasound imaging, teaches an apparatus main body (at least fig. 1 (160) and corresponding disclosure in at least [0063]) including a monitor (at least fig. 1 (230) and corresponding disclosure in at least [0063]) disposed on one surface of the apparatus main body (see at least fig. 1), a plurality of optical cameras including a first camera and a second camera ([0063], which discloses one or more cameras, thus including an interpretation of a plurality of cameras including a first camera (100) and a second camera (100)), and a processor (at least fig. 1 (110) and corresponding disclosure in at least [0063]), wherein the apparatus main body is a handheld terminal apparatus ([0063], which discloses handheld unit 160 may be, for example, a tablet, a notebook, or a smartphone, and one or more cameras 100 and image processing unit 110 may be internal components of handheld unit 160), and the plurality of optical cameras are disposed on the one surface of the apparatus main body, or on another surface of the apparatus main body (see at least fig. 1).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified the apparatus main body of Miyachi to be a handheld terminal apparatus as taught by Stolka in order to allow for local, compact, and non-intrusive solutions, i.e., ideal tracking systems for hand-held and compact ultrasound systems used in intervention and point-of-care clinical suites (Stolka [0028]). A person having ordinary skill would have further recognized the benefit of the enhanced mobility of the system of Miyachi, allowing for tracking directly from the display's point of view while allowing the operator to move the display during imaging procedures as desired.
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified Miyachi to include a plurality of cameras to allow for multiple tracking positions, such that the cameras may simultaneously or successively observe the patient/probe (Stolka [0072]), thereby providing enhanced positioning data by providing multiple cameras for tracking the probe accordingly.
It would appear that the first and second optical cameras would have differing angles of view; however, this feature is not explicitly taught by Miyachi as currently modified. Miyachi, as modified, further fails to explicitly teach wherein the processor is configured to select one or more optical cameras from among the plurality of optical cameras based on an instruction from a user.
Tesar, in a similar field of endeavor involving medical procedures, teaches a plurality of optical cameras (at least fig. 21C (18) and corresponding disclosure in at least [0451]) having different angles of view from each other ([0451], which discloses a camera configured to be adjustable to provide varying levels of magnification, viewing angles, monocular or stereo imagery, convergence angles, working distance, or any combination of these), and
an apparatus main body including a processor configured to:
select one or more optical cameras from among the plurality of optical cameras based on an instruction from a user ([0045], which discloses the image processing system is configured to: receive the video images acquired by the plurality of cameras; receive input from a user indicating a selection of at least two of the plurality of cameras, said selection being less than all of said cameras in said plurality of cameras; and provide output video images based on the video images acquired by the selected cameras), and
control the one or more optical cameras to generate one or more optical images ([0046], which discloses the processor provides output video images based on the video images acquired by the selected cameras. Examiner notes that the processor would further control each of the one or more cameras to generate its corresponding optical image).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified Miyachi to include a plurality of optical cameras having different angles of view from each other and selecting one or more optical cameras as taught by Tesar in order to provide desired image quality/display capabilities (Tesar [0428]) and to allow a user to select the imagery to be displayed and the manner in which it is displayed for enhanced utility during the procedure (Tesar [0278]). Such a modification would allow different views to be visualized by a user as desired during the ultrasound procedure of Miyachi, thereby enhancing the user's visualization accordingly.
Examiner notes that in the modified system, Miyachi teaches the processor generates the trimmed image by trimming a region including the ultrasound probe from one optical image generated by the first camera in a case where the ultrasound probe is recognized, thus generates the trimmed image by trimming a region including the ultrasound probe from one optical image generated by any one of the one or more optical cameras accordingly.
Regarding claim 3,
Miyachi further teaches wherein the processor recognizes the ultrasound probe from the one optical image generated by one optical camera (pg. 9 sixth full paragraph disclosing that the probe detection unit of the processor detects (i.e. recognizes) the position of the ultrasound probe based on the digital image D acquired by the digital image acquisition unit 9. For example, the probe detection unit 25 can detect the position of the ultrasound probe 2 based on color information of the digital image D and last paragraph which discloses detecting the position of the ultrasonic probe from the digital image), and
generates the trimmed image from the one optical image generated by the one optical camera (pg. 9 seventh full paragraph which discloses the trimming unit of the processor generates a trimmed image in which the peripheral portion of the position of the ultrasonic probe detected by the probe detection unit 25 is cut out from the digital image).
Tesar, as applied to claim 2 above, further teaches wherein the processor is configured to select one optical camera from among the plurality of optical cameras based on an instruction from the user ([0095], which discloses that in some embodiments the selection can comprise one of the plurality of cameras).
Examiner notes that in the modified system the processor is configured to recognize the ultrasound probe from an optical image generated by the one camera selected per Tesar, and is configured to generate the trimmed image from the optical image generated by the one optical camera.
Regarding claim 4,
Miyachi, as modified, teaches the elements of claim 2 as previously stated.
Tesar, as applied to claim 2 above, further teaches wherein the processor selects a first optical camera and a second optical camera from among the plurality of optical cameras based on an instruction from the user ([0045] which discloses herein the image processing system is configured to: receive the video images acquired by the plurality of cameras; receive input from a user indicating a selection of at least two of the plurality of cameras, said selection being less than all of said cameras in said plurality of cameras; provide output video images based on the video images acquired by the selected cameras)
the first optical camera is a standard optical camera that generates a standard optical image by imaging the subject under examination with a first angle of view ([0451], which discloses a camera configured to be adjustable to provide varying levels of magnification, viewing angles, monocular or stereo imagery, convergence angles, working distance, or any combination of these; [0421], which discloses the surgical visualization system can include one or more, e.g., proximal (or distal), cameras configured to provide stereo imagery of at least a portion of the surgical site, wherein the cameras acquiring stereo imagery can have a field of view that is less than the field of view of the wide field of view cameras), and
the second optical camera is a wide-angle camera that generates a wide-angle optical image by imaging the subject under examination with a second angle of view wider than that of the standard camera ([0451], which discloses a camera configured to be adjustable to provide varying levels of magnification, viewing angles, monocular or stereo imagery, convergence angles, working distance, or any combination of these; [0421], which discloses the surgical visualization system can include one or more proximal cameras configured to provide a relatively wide field of view of a surgical site, and that in some embodiments the image processing system can provide a wide field of view, for example, by stitching and/or morphing monocular wide field of view camera data).
Miyachi, as currently modified, fails to explicitly teach wherein the processor is configured to recognize the ultrasound probe from each of the standard optical image and the wide-angle optical image. However, since Miyachi is directed to recognizing the probe from an optical image, it would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified Miyachi, as currently modified, to include recognizing the ultrasound probe from each of the standard optical image and the wide-angle optical image in order to ensure that the probe is tracked by each of the selected cameras, such that the user may better visualize the probe data from different points of view/camera angles, thereby enhancing the overall diagnostic procedure.
Regarding claim 5,
Miyachi, as modified, teaches the elements of claim 4 as previously stated. Miyachi further teaches wherein the processor generates the trimmed image from an optical image in which the probe is recognized (pg. 9, which discloses that the trimming unit generates a trimmed image in which the peripheral portion of the position of the ultrasonic probe detected by the probe detection unit is cut out from the digital image).
Examiner thus notes that the modified system would thus function to generate the trimmed image from the standard optical image in a case where the ultrasound probe is recognized from both the standard optical image and the wide-angle optical image (since the ultrasound probe is recognized in the standard optical image) and to generate the trimmed image from the wide-angle optical image in a case where the ultrasound probe is recognized from wide angle optical image and the ultrasound probe is not recognized from the standard optical image.
Regarding claims 6 and 18,
Miyachi, as modified, teaches the elements of claims 4 and 5 as previously stated. Miyachi, as modified, further teaches wherein the processor recognizes the ultrasound probe from the wide-angle optical image, and recognizes the ultrasound probe from the standard optical image in a case where the ultrasound probe is recognized from the wide-angle optical image (see the above rejection of claim 4, in which the processor is configured to recognize the ultrasound probe in both the wide-angle optical image and the standard image, and thus would function to recognize the ultrasound probe from the standard image in any case, including the case where the ultrasound probe is recognized from the wide-angle optical image).
Regarding claim 20,
Miyachi, as modified, teaches the elements of claim 2 as previously stated. Miyachi further teaches wherein the processor recognizes, in the optical image, the ultrasound probe from only a trimmed region set (pg. 9, which discloses that the trimming unit generates a trimmed image in which the peripheral portion of the position of the ultrasonic probe detected by the probe detection unit is cut out from the digital image) as the predetermined partial region based on an instruction from a user (pg. 4, which discloses an input unit and the like for setting the imaging conditions, and further discloses that the device control unit 12 of the processor 24 operates based on an instruction input by the user via the ultrasonic image operation unit 22 and the digital image operation unit 23 of the touch panel 8 and an operation program stored in the storage unit 13. Thus, any trimmed region is necessarily based on instructions input by the user under the broadest reasonable interpretation), and
the processor generates the trimmed image from only the trimmed region (pg. 9, which discloses that the trimming unit generates a trimmed image in which the peripheral portion of the position of the ultrasonic probe detected by the probe detection unit is cut out from the digital image).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Miyachi in view of Stolka, as applied to claim 1 above, and further in view of Choi (US 20190357881 A1), hereinafter Choi.
Regarding claim 7,
Miyachi teaches the elements of claim 1 as previously stated. Miyachi, as modified, fails to explicitly teach wherein the apparatus main body includes a region setting memory configured to store, in the optical image divided into a plurality of regions set in advance, one region set in advance from among the plurality of regions.
Nonetheless, Choi, in a similar field of endeavor involving ultrasonic probe tracking, teaches wherein an apparatus main body includes a region setting memory ([0076]-[0077]) configured to store, in an optical image divided into a plurality of regions set in advance, one region set in advance from among the plurality of regions ([0099], which discloses the ultrasonic diagnosis device may capture an examinee, detect feature points, acquire a bone structure by connecting the feature points, split the bone structure into a plurality of segments, and select one of the plurality of segments as the location information of the probe; see at least fig. 5 depicting the split bone structure. Examiner notes that such segments are from an optical image which captures the examinee ([0129]-[0130]), that such a selection necessarily sets the region in advance, and that the selection is considered to be stored in a memory of the computer in order for the computer to use the information as the location information of the probe).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified Miyachi, as currently modified, to include a region setting memory as taught by Choi in order to reduce the error of the location information of the probe and to allow an accurate location of the probe with respect to the examinee to be measured (Choi [0102]).
Examiner notes that in the modified system Miyachi would recognize the ultrasound probe from only the one region (since the one region is selected as having the ultrasound probe therein), which would be set as the predetermined partial region, and the processor would generate the trimmed image from only the one region.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Miyachi in view of Stolka, as applied to claim 1 above, and further in view of Weir (US 20170189127 A1), hereinafter Weir.
Regarding claim 10,
Miyachi, as modified, teaches the elements of claim 1 as previously stated.
Miyachi, as modified, fails to explicitly teach wherein the processor issues a notification that the ultrasound probe is not recognized in a case where the ultrasound probe is not recognized.
Weir, in a similar field of endeavor of medical procedures, teaches a processor ([0109], which discloses a notification mechanism 610 (e.g., an I/O interface such as a display, a speaker, a light, etc.)) configured to issue a notification that an instrument is no longer in a field of view of a camera ([0016], which discloses that if the surgical instrument moves out of the field of view as a result of the adjusting, a notification is provided to the user indicating that the surgical instrument has moved out of the field of view).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified Miyachi to include issuing a notification as taught by Weir in order to notify a user that the ultrasound probe is no longer in the field of view of the camera, thereby allowing the user to reposition the probe or the camera such that the image may accurately depict the position of the probe with respect to the patient and enhancing the overall diagnostic procedure.
Examiner notes that in the modified system such a notification is a notification that the ultrasound probe is not recognized (i.e., the probe is not within the field of view of the camera and therefore cannot be recognized) in a case where the ultrasound probe is not recognized.
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Miyachi in view of Stolka, as applied to claim 1 above, and further in view of Errico et al. (US 20240164757 A1), hereinafter Errico.
Regarding claim 14,
Miyachi, as modified, teaches the elements of claim 1 as previously stated. Miyachi further teaches wherein the processor (pg. 5, disclosing a device control unit) is configured to freeze at least one of the ultrasound image or the optical image (pg. 6, which discloses that when the freeze button F is touched by the user, the ultrasound image U of the frame displayed on the display screen is freeze-displayed).
Miyachi, as modified, fails to explicitly teach wherein the at least one optical camera includes a third optical camera disposed on a surface of the apparatus main body on the same side as the monitor, and
the processor is configured to recognize a user's eye gaze based on an optical image generated by the third optical camera, and
freeze at least one of the ultrasound image or the optical image in a case where an eye gaze behavior set in advance is recognized.
Errico, in a similar field of endeavor involving ultrasound imaging, teaches a third optical camera (at least fig. 1A (18) and corresponding disclosure in at least [0041]) disposed on a surface of an apparatus main body (at least fig. 1A (14) and corresponding disclosure in at least [0041]) on a same side as a monitor (at least fig. 1A (16) and corresponding disclosure in at least [0041]), and
The apparatus main body (14) includes a processor (at least fig. 3 (36) and corresponding disclosure in at least [0073] and [0042], which discloses that the smart device 14 includes at least a controller for performing various functions discussed herein) configured to:
recognize a user’s eye gaze based on an optical image generated by the third camera (Abstract which discloses Eye-gaze focus point locations on the smart device are determined within a combination of image space and control space (26) portions of the display via an image processing framework (36) configured to track the device operator's gaze and eye movement determined from the acquired digital images), and
freeze at least one of an ultrasound image or an optical image in a case where an eye gaze behavior set in advance is recognized ([0073], which discloses that upon predicting the location 102 on the display 16 that the device operator 12 is looking at corresponds with the soft button 28, the command is then executed; in this example, the "save image" command is confirmed and executed. See also [0012], which discloses that the accumulation of determined or predicted eye-gaze focus point locations further defines a path that comprises a sequence of the determined or predicted eye-gaze focus point locations accumulated over time, wherein the path is identified with an action associated with a corresponding portion of the ultrasound scan protocol and/or ultrasound exam, and wherein the action comprises freezing the at least one ultrasound image being displayed in the image space portion of the display).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified Miyachi to include a third camera, eye gaze recognition, and freezing of the ultrasound image as taught by Errico in order to allow for hands-free control of the ultrasound system. Such a modification would improve the method/system of Miyachi by allowing improved control over the system when both hands are in use (Errico [0006]-[0008]) as well as assisting users during ultrasound scanning procedures, especially in ultrasound-guided intervention procedures (Errico [0086]).
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Miyachi in view of Stolka and Tesar, as applied to claim 2 above, and further in view of Choi.
Regarding claim 19,
Miyachi, as modified, teaches the elements of claim 2 as previously stated. Miyachi, as modified, fails to explicitly teach wherein the apparatus main body includes a region setting memory configured to store, in the optical image divided into a plurality of regions set in advance, one region set in advance from among the plurality of regions.
Nonetheless, Choi, in a similar field of endeavor involving ultrasonic probe tracking, teaches wherein an apparatus main body includes a region setting memory ([0076]-[0077]) configured to store, in an optical image divided into a plurality of regions set in advance, one region set in advance from among the plurality of regions ([0099], which discloses that the ultrasonic diagnosis device may capture an examinee and detect feature points, may acquire a bone structure by connecting the feature points, may split the bone structure into a plurality of segments, and may select one of the plurality of segments as the location information of the probe. See at least fig. 5 depicting the split bone structure. Examiner notes that such segments are from an optical image which captures the examinee ([0129]-[0130]). Examiner further notes that such selection necessarily sets the region in advance and that the selection is considered to be stored in a memory of the computer in order for the computer to use the information as the location information of the probe).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified Miyachi, as currently modified, to include a region setting memory as taught by Choi in order to reduce the error of the location information of the probe and to allow an accurate location of the probe with respect to the examinee to be measured (Choi [0102]).
Examiner notes that in the modified system Miyachi would recognize the ultrasound probe from only the one region (since the one region is selected as having the ultrasound probe therein), which is set as the predetermined partial region, and the trimming section would generate the trimmed image from only the one region.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BROOKE L KLEIN whose telephone number is (571)270-5204. The examiner can normally be reached Mon-Fri 7:30-4.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anne Kozak can be reached at 5712700552. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BROOKE LYN KLEIN/Primary Examiner, Art Unit 3797