Prosecution Insights
Last updated: April 19, 2026
Application No. 18/703,422

MIXED REALITY-BASED ULTRASONIC IMAGE DISPLAY DEVICE, METHOD, AND SYSTEM

Non-Final OA (§102, §103, §112)
Filed: Apr 22, 2024
Examiner: VIRK, ADIL PARTAP S
Art Unit: 3798
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Korea University Research And Business Foundation
OA Round: 3 (Non-Final)
Grant Probability: 48% (Moderate)
OA Rounds: 3-4
To Grant: 3y 2m
With Interview: 89%

Examiner Intelligence

Career Allow Rate: 48% (grants 102 of 213 resolved cases; -22.1% vs TC avg)
Interview Lift: +41.3% (strong; resolved cases with interview)
Avg Prosecution: 3y 2m (typical timeline; 44 currently pending)
Total Applications: 257 (career history, across all art units)

Statute-Specific Performance

§101: 13.0% (-27.0% vs TC avg)
§103: 38.8% (-1.2% vs TC avg)
§102: 13.6% (-26.4% vs TC avg)
§112: 31.0% (-9.0% vs TC avg)
Comparisons are against a Tech Center average estimate; based on career data from 213 resolved cases.
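The card figures above can be reproduced from the underlying counts. A minimal sketch, assuming the dashboard's deltas are simple differences against the Tech Center average (the tool's exact aggregation method is not documented here):

```python
# Reproduce the examiner-statistics cards from the raw counts shown above.
granted, resolved = 102, 213

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")        # 47.9%, displayed as 48%

# The card reports -22.1% vs the TC average, implying the average itself.
tc_avg_estimate = allow_rate + 0.221
print(f"Implied TC average: {tc_avg_estimate:.1%}")  # roughly 70.0%

# Statute-specific rates and their reported gaps to the TC average estimate.
statute_rates = {
    "101": (0.130, -0.270),
    "103": (0.388, -0.012),
    "102": (0.136, -0.264),
    "112": (0.310, -0.090),
}
for statute, (rate, delta) in statute_rates.items():
    # rate - delta recovers the TC average estimate the delta was taken against
    print(f"§{statute}: examiner {rate:.1%}, TC avg estimate {rate - delta:.1%}")
```

Note that 102/213 rounds to 47.9%, so the headline "48%" is itself a rounded figure; the statute-level TC averages recovered this way all land near 40%, consistent with them being per-statute rejection-overcome rates rather than overall allowance rates.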

Office Action

§102, §103, §112
DETAILED ACTION

This office action is in response to the communication received on 11/25/2025 concerning application no. 18/703,422 filed on 04/22/2024. Claims 1, 13-16, and 18 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/25/2025 has been entered. Claims 1, 13-16, and 18 are pending.

Response to Arguments

Applicant's arguments filed 11/25/2025 have been fully considered but they are not persuasive. Regarding the Tano reference, Applicant argues that it does not teach the amended language. Applicant argues that the reference does not teach the display of the 3D image overlaid onto the affected area regardless of the probe position. Applicant argues that Tano does not teach the ability to create multiple images based on the probe position or that they remain when the probe moves. Applicant believes that it would be too difficult to create multiple images based on the probe movement and display them. Examiner disagrees. MPEP 716.01(c) establishes that “Arguments presented by the applicant cannot take the place of evidence in the record. In re Schulze, 346 F.2d 600, 602, 145 USPQ 716, 718 (CCPA 1965) and In re De Blauwe, 736 F.2d 699, 705, 222 USPQ 191, 196 (Fed. Cir. 1984).” Contrary to Applicant’s allegations, Tano teaches the amended language. Page 464 teaches that the ultrasound image is visible via the HMD based on the image processing and camera and marker position detection.
Fig. 7 shows the ultrasound image is collected via the probe and processed for the input of a 3D ultrasound image in the HMD. Figs. 8-9 show the visualization of volumetric information that is based on the moving probe capturing a series of slices. The 3D visualization compounds the slice images as seen in Fig. 8. Fig. 10 shows the view through the HMD of the ultrasound probe where the image is at the probe face end with respect to a traditional image. Page 464 teaches operability in real time. Page 466 teaches that doctors usually check the real-time 3D slice data showing the inside of the patient's body. They may find something wrong in the cross-section image of a blood vessel, so they will want to scan the slice data by moving the probe and viewing them as volumetric visualization. In the example in Fig. 8, the network of the blood vessels is recognized and displayed by the volumetric visualization. The figure shows a series of slices that are retained with respect to the patient anatomy even if the probe has moved. That is, the 3D image is generated regardless of the probe position as the probe acquires the information that is used to generate the ultrasound image. The drawing of the 3D image of Tano in Fig. 8 is similar to Applicant’s own Fig. 3 as the ultrasound images are summed as the probe is moved. Applicant’s allegation that it would be too difficult to create multiple images based on the probe movement and display them is unpersuasive. Again, MPEP 716.01(c) establishes that “Arguments presented by the applicant cannot take the place of evidence in the record. In re Schulze, 346 F.2d 600, 602, 145 USPQ 716, 718 (CCPA 1965) and In re De Blauwe, 736 F.2d 699, 705, 222 USPQ 191, 196 (Fed. Cir. 1984).” Applicant has not provided any objective evidence for such an allegation.
In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., multiple 3D images) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Assuming, arguendo, it was present in the independent claim, the concept of multiple images being acquired is not novel and is instead obvious in light of Buras and the pertinent art cited. Examiner maintains the rejection.

Drawings

The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims. Therefore, the “wherein a position of the virtual 3D image is moved according to a first user input to the input device to correct a gap between the position of virtual 3D image and the image of the affected area of the human body” (Claims 1 and 13-14) must be shown or the feature(s) canceled from the claim(s). No new matter should be entered. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures.
Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1, 13-16, and 18 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claim 1 recites “obtaining the ultrasound image captured through an ultrasound device in response to a contact by a probe of the ultrasound device to an affected area of a human body”. While the specification discloses image acquisition and the contact of the probe with the body (Paragraphs 0046 and 0064-65), the specification fails to disclose that the acquisition occurs in direct response to the contact being made. The specification fails to disclose any considerations or factors on what prompts the acquisition and what is used to assess the contact such that the imaging is prompted as a response. Additionally, this is understood to be a computer-implemented functional limitation which requires disclosure of the underlying algorithm(s) for obtaining the result in order to comply with the written description requirement. See MPEP § 2161.01(I). Therefore, the claim contains subject matter which is not described in the specification in such a way as to reasonably convey to one with ordinary skill in the art that the inventor had possession of the claimed invention at the time of filing.

Claim 1 recites “even when the probe of the ultrasound device is not pressed to the affected area of the human body”. The cited phraseology clearly signifies a “negative” or “exclusionary” limitation for which the applicants have no support in the original disclosure. Negative limitations in a claim which do not appear in the specification as filed introduce new concepts and violate the description requirement of 35 USC 112(a), Ex Parte Grasselli, Suresh, and Miller, 231 USPQ 393, 394 (Bd. Pat. App. and Inter. 1983); 783 F. 2d 453. The insertion of the above phraseology positively excludes instances of the probe being pressed on the affected area; however, there is no support in the originally filed specification for such exclusions.
While the originally filed specification is silent with respect to the use of instances of the probe being pressed on the affected area, it is noted that as stated in MPEP 2173.05(i), the “mere absence of a positive recitation is not the basis for an exclusion.”

Claim 1 recites “capturing the virtual 3D image of the affected area overlaid onto the image of the affected area in response to a second user input to the input device for capturing the virtual 3D image overlaid onto the image of the affected area and fixedly displaying the virtual 3D image overlaid onto the image of the affected area even when the probe of the ultrasound device is not pressed to the affected area of the human body”. While paragraphs 0070-76 and 0096-0102 disclose the fixing of the ultrasound image, the specification does not disclose the fixing's relationship with respect to the probe or its being unaffected by the probe. Therefore, the claim contains subject matter which is not described in the specification in such a way as to reasonably convey to one with ordinary skill in the art that the inventor had possession of the claimed invention at the time of filing.

Claim 13 recites “obtain an ultrasound image captured through an ultrasound device in response to a contact by a probe of the ultrasound device to an affected area of a human body”. While the specification discloses image acquisition and the contact of the probe with the body (Paragraphs 0046 and 0064-65), the specification fails to disclose that the acquisition occurs in direct response to the contact being made. The specification fails to disclose any considerations or factors on what prompts the acquisition and what is used to assess the contact such that the imaging is prompted as a response. Additionally, this is understood to be a computer-implemented functional limitation which requires disclosure of the underlying algorithm(s) for obtaining the result in order to comply with the written description requirement. See MPEP § 2161.01(I).
Therefore, the claim contains subject matter which is not described in the specification in such a way as to reasonably convey to one with ordinary skill in the art that the inventor had possession of the claimed invention at the time of filing.

Claim 13 recites “even when the probe of the ultrasound device is not pressed to the affected area of the human body”. The cited phraseology clearly signifies a “negative” or “exclusionary” limitation for which the applicants have no support in the original disclosure. Negative limitations in a claim which do not appear in the specification as filed introduce new concepts and violate the description requirement of 35 USC 112(a), Ex Parte Grasselli, Suresh, and Miller, 231 USPQ 393, 394 (Bd. Pat. App. and Inter. 1983); 783 F. 2d 453. The insertion of the above phraseology positively excludes instances of the probe being pressed on the affected area; however, there is no support in the originally filed specification for such exclusions. While the originally filed specification is silent with respect to the use of instances of the probe being pressed on the affected area, it is noted that as stated in MPEP 2173.05(i), the “mere absence of a positive recitation is not the basis for an exclusion.”

Claim 13 recites “wherein the virtual 3D image of the affected area overlaid onto the image of the affected area is captured in response to a second user input to the input device for capturing the virtual 3D image overlaid onto the image of the affected area and the virtual 3D image overlaid onto the image of the affected area is fixedly displayed even when the probe of the ultrasound device is not pressed to the affected area of the human body”. While paragraphs 0070-76 and 0096-0102 disclose the fixing of the ultrasound image, the specification does not disclose the fixing's relationship with respect to the probe or its being unaffected by the probe.
Therefore, the claim contains subject matter which is not described in the specification in such a way as to reasonably convey to one with ordinary skill in the art that the inventor had possession of the claimed invention at the time of filing.

Claim 14 recites “an ultrasound device to capture an ultrasound image in response to a contact by a probe of the ultrasound device to an affected area of a human body”. While the specification discloses image acquisition and the contact of the probe with the body (Paragraphs 0046 and 0064-65), the specification fails to disclose that the acquisition occurs in direct response to the contact being made. The specification fails to disclose any considerations or factors on what prompts the acquisition and what is used to assess the contact such that the imaging is prompted as a response. Additionally, this is understood to be a computer-implemented functional limitation which requires disclosure of the underlying algorithm(s) for obtaining the result in order to comply with the written description requirement. See MPEP § 2161.01(I). Therefore, the claim contains subject matter which is not described in the specification in such a way as to reasonably convey to one with ordinary skill in the art that the inventor had possession of the claimed invention at the time of filing.

Claim 14 recites “even when the probe of the ultrasound device is not pressed to the affected area of the human body”. The cited phraseology clearly signifies a “negative” or “exclusionary” limitation for which the applicants have no support in the original disclosure. Negative limitations in a claim which do not appear in the specification as filed introduce new concepts and violate the description requirement of 35 USC 112(a), Ex Parte Grasselli, Suresh, and Miller, 231 USPQ 393, 394 (Bd. Pat. App. and Inter. 1983); 783 F. 2d 453.
The insertion of the above phraseology positively excludes instances of the probe being pressed on the affected area; however, there is no support in the originally filed specification for such exclusions. While the originally filed specification is silent with respect to the use of instances of the probe being pressed on the affected area, it is noted that as stated in MPEP 2173.05(i), the “mere absence of a positive recitation is not the basis for an exclusion.”

Claim 14 recites “wherein the virtual 3D image of the affected area overlaid onto the image of the affected area is captured in response to a second user input to the input device for capturing the virtual 3D image overlaid onto the image of the affected area and the virtual 3D image overlaid onto the image of the affected area is fixedly displayed even when the probe of the ultrasound device is not pressed to the affected area of the human body”. While paragraphs 0070-76 and 0096-0102 disclose the fixing of the ultrasound image, the specification does not disclose the fixing's relationship with respect to the probe or its being unaffected by the probe. Therefore, the claim contains subject matter which is not described in the specification in such a way as to reasonably convey to one with ordinary skill in the art that the inventor had possession of the claimed invention at the time of filing.

Claim 18 recites “even when the probe of the ultrasound device is not pressed to any of the affected areas of the human body”. The cited phraseology clearly signifies a “negative” or “exclusionary” limitation for which the applicants have no support in the original disclosure. Negative limitations in a claim which do not appear in the specification as filed introduce new concepts and violate the description requirement of 35 USC 112(a), Ex Parte Grasselli, Suresh, and Miller, 231 USPQ 393, 394 (Bd. Pat. App. and Inter. 1983); 783 F. 2d 453.
The insertion of the above phraseology positively excludes instances of the probe being pressed on the affected area; however, there is no support in the originally filed specification for such exclusions. While the originally filed specification is silent with respect to the use of instances of the probe being pressed on the affected area, it is noted that as stated in MPEP 2173.05(i), the “mere absence of a positive recitation is not the basis for an exclusion.”

Claim 18 recites “wherein a plurality of virtual 3D images of affected areas overlaid onto images of the affected areas are captured in response to a plurality of user inputs to the input device for capturing the plurality of virtual 3D images overlaid onto the images of the affected areas and the plurality of virtual 3D images overlaid onto the images of the affected areas are fixedly displayed even when the probe of the ultrasound device is not pressed to any of the affected areas of the human body”. While paragraphs 0070-76 and 0096-0102 disclose the fixing of the ultrasound image, the specification does not disclose the fixing's relationship with respect to the probe or its being unaffected by the probe. Therefore, the claim contains subject matter which is not described in the specification in such a way as to reasonably convey to one with ordinary skill in the art that the inventor had possession of the claimed invention at the time of filing.

Claims that are not discussed above but are cited to be rejected under 35 U.S.C. 112(a) are also rejected because they inherit the deficiencies of the claims they respectively depend upon.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C.
112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 13-16, and 18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 1 is indefinite for the following reasons:

Recites “obtaining the ultrasound image captured through an ultrasound device in response to a contact by a probe of the ultrasound device to an affected area of a human body”. This claim element is indefinite. It would be unclear to one with ordinary skill in the art if the claim is merely establishing that the ultrasound probe is in contact with the body during imaging or that the imaging commences by prompt when the probe contacts the body. Applicant is encouraged to provide consistent and clear language.

Recites “an affected area of a human body”. This claim element is indefinite. It would be unclear to one with ordinary skill in the art in what context the area of the human body is “affected”. One interpretation is that it is a diseased part of the body. Another interpretation is that it is an area with a contrast agent. Applicant is encouraged to provide consistent and clear language.

Recites “converting the ultrasound image of the affected area into a virtual 3D image of the affected area”. This claim element is indefinite. It would be unclear what the element is attempting to convey, as an ultrasound image is a virtual image. That is, an image is “a visual representation of something: such as (1) : a likeness of an object produced on a photographic material (2) : a picture produced on an electronic display (such as a television or computer screen)”.
An image is a virtual representation of what it captures. Given this, it would be unclear what it is converted to. Applicant is encouraged to provide consistent and clear language.

Recites “wherein a position of the virtual 3D image is moved according to a first user input to the input device to correct a gap between the position of virtual 3D image and the image of the affected area of the human body”. This claim element is indefinite. It would be unclear to one with ordinary skill in the art what gap is present and in what manner it requires correction. The preceding element has already established “the virtual 3D image of the affected area overlaid onto an image of the affected area”. It is unclear if the claim is establishing that the preceding element is performed incorrectly. Another interpretation is that this is an attempt to establish a real-time basis of operation. Applicant is encouraged to provide consistent and clear language.

Recites “even when the probe of the ultrasound device is not pressed to the affected area of the human body”. This claim element is indefinite. It would be unclear to one with ordinary skill in the art if this element is being actively claimed or not. It would be unclear if the claim requires the probe to not be pressed to the affected area of the body. Applicant is encouraged to provide consistent and clear language.

Claim 13 is indefinite for the following reasons:

Recites “obtain an ultrasound image captured through an ultrasound device in response to a contact by a probe of the ultrasound device to an affected area of a human body”. This claim element is indefinite. It would be unclear to one with ordinary skill in the art if the claim is merely establishing that the ultrasound probe is in contact with the body during imaging or that the imaging commences by prompt when the probe contacts the body. Applicant is encouraged to provide consistent and clear language.

Recites “an affected area of a human body”.
This claim element is indefinite. It would be unclear to one with ordinary skill in the art in what context the area of the human body is “affected”. One interpretation is that it is a diseased part of the body. Another interpretation is that it is an area with a contrast agent. Applicant is encouraged to provide consistent and clear language.

Recites “convert the ultrasound image of the affected area into a virtual 3D image of the affected area”. This claim element is indefinite. It would be unclear what the element is attempting to convey, as an ultrasound image is a virtual image. That is, an image is “a visual representation of something: such as (1) : a likeness of an object produced on a photographic material (2) : a picture produced on an electronic display (such as a television or computer screen)”. An image is a virtual representation of what it captures. Given this, it would be unclear what it is converted to. Applicant is encouraged to provide consistent and clear language.

Recites “wherein a position of the virtual 3D image is moved according to a first user input to the input device to correct a gap between the position of virtual 3D image and the image of the affected area of the human body”. This claim element is indefinite. It would be unclear to one with ordinary skill in the art what gap is present and in what manner it requires correction. The preceding element has already established “wherein the virtual 3D image of the affected area overlaid onto an image of the affected area”. It is unclear if the claim is establishing that the preceding element is performed incorrectly. Another interpretation is that this is an attempt to establish a real-time basis of operation. Applicant is encouraged to provide consistent and clear language.

Recites “even when the probe of the ultrasound device is not pressed to the affected area of the human body”. This claim element is indefinite.
It would be unclear to one with ordinary skill in the art if this element is being actively claimed or not. It would be unclear if the claim requires the probe to not be pressed to the affected area of the body. Applicant is encouraged to provide consistent and clear language.

Claim 14 is indefinite for the following reasons:

Recites “an ultrasound device to capture an ultrasound image in response to a contact by a probe of the ultrasound device to an affected area of a human body”. This claim element is indefinite. It would be unclear to one with ordinary skill in the art if the claim is merely establishing that the ultrasound probe is in contact with the body during imaging or that the imaging commences by prompt when the probe contacts the body. Applicant is encouraged to provide consistent and clear language.

Recites “an affected area of a human body”. This claim element is indefinite. It would be unclear to one with ordinary skill in the art in what context the area of the human body is “affected”. One interpretation is that it is a diseased part of the body. Another interpretation is that it is an area with a contrast agent. Applicant is encouraged to provide consistent and clear language.

Recites “convert the ultrasound image into a virtual 3D image of the affected area”. This claim element is indefinite. It would be unclear what the element is attempting to convey, as an ultrasound image is a virtual image. That is, an image is “a visual representation of something: such as (1) : a likeness of an object produced on a photographic material (2) : a picture produced on an electronic display (such as a television or computer screen)”. An image is a virtual representation of what it captures. Given this, it would be unclear what it is converted to. Applicant is encouraged to provide consistent and clear language.
Recites “wherein a position of the virtual 3D image is moved according to a first user input to the input device to correct a gap between the position of virtual 3D image and the image of the affected area of the human body”. This claim element is indefinite. It would be unclear to one with ordinary skill in the art what gap is present and in what manner it requires correction. The preceding element has already established the “virtual 3D image of the affected area overlaid onto an image of the affected area”. It is unclear if the claim is establishing that the preceding element is performed incorrectly. Another interpretation is that this is an attempt to establish a real-time basis of operation. Applicant is encouraged to provide consistent and clear language.

Recites “even when the probe of the ultrasound device is not pressed to the affected area of the human body”. This claim element is indefinite. It would be unclear to one with ordinary skill in the art if this element is being actively claimed or not. It would be unclear if the claim requires the probe to not be pressed to the affected area of the body. Applicant is encouraged to provide consistent and clear language.

Claim 18 is indefinite for the following reasons:

Recites “a plurality of virtual 3D images”. This claim element is indefinite. It would be unclear to one with ordinary skill in the art if the claim is establishing that these include the virtual 3D image in claim 1 or are separate and distinct. Applicant is encouraged to provide consistent and clear language.

Recites “affected areas”. This claim element is indefinite. It would be unclear to one with ordinary skill in the art if the claim is establishing that these include the affected area in claim 1 or are separate and distinct. Applicant is encouraged to provide consistent and clear language.

Recites “the affected areas”. There is insufficient antecedent basis for this limitation in the claim.

Recites “a plurality of user inputs”.
This claim element is indefinite. It would be unclear to one with ordinary skill in the art if the claim is establishing that these include the user inputs in claim 1 or are separate and distinct. Furthermore, it is unclear if this is just referring to the two inputs in claim 1. Applicant is encouraged to provide consistent and clear language.

Recites “even when the probe of the ultrasound device is not pressed to any of the affected areas of the human body”. This claim element is indefinite. It would be unclear to one with ordinary skill in the art if this element is being actively claimed or not. It would be unclear if the claim requires the probe to not be pressed to the affected area of the body. Applicant is encouraged to provide consistent and clear language.

Claims that are not discussed above but are cited to be rejected under 35 U.S.C. 112(b) are also rejected because they inherit the indefiniteness of the claims they respectively depend upon.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1 and 13-15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Tano et al.
("Simple Augmented Reality System for 3D Ultrasonic Image by See-through HMD and Single Camera and Marker Combination", 2012). Regarding claim 1, Tano teaches a method of displaying an ultrasound image based on mixed reality on a head-mounted display (HMD), comprising: obtaining the ultrasound image captured through an ultrasound device in response to a contact by a probe of the ultrasound device to an affected area of a human body (Page 466 teaches the probe is moving and slice image is acquired. See Figs. 8-10); obtaining localization information of the probe of the ultrasound device (Page 466 teaches the camera and marker combination detects the relative 3D position between the doctor's head and the probe. Fig. 1 teaches position sensing according to the camera and marker); converting the ultrasound image of the affected area into a virtual 3D image of the affected area (Page 464 teaches that the ultrasound image is visible via the HMD based on the image processing and camera and marker position detection. Fig. 7 shows the ultrasound image is collected via the probe and processed for the input of a 3D ultrasound image in the HMD. Figs. 8-9 shows the visualization of volumetric information that is based on the moving probe capturing a series of slices. Fig. 10 shows the view through the HMD of the ultrasound probe where the image is at the probe face end with respect to a traditional image); and displaying, on a screen of the HMD, the virtual 3D image of the affected area overlaid onto an image of the affected area while the probe of the ultrasound device is pressed to the affected area of the human body (Figs. 8-9 shows the visualization of volumetric information that is based on the moving probe capturing a series of slices. Fig. 10 shows the view through the HMD of the ultrasound probe where the image is at the probe face end with respect to a traditional image), wherein the HMD comprises an input device (Fig. 
7 provides the system architecture that utilizes the camera and marker combination to assess user control), wherein the virtual 3D image is displayed adjacent to an end area of the probe which appears on the image of the affected area of the human body (Fig. 10 shows the view through the HMD of the ultrasound probe where the image is at the probe face end with respect to a traditional image), and wherein a position of the virtual 3D image is moved according to a first user input to the input device to correct a gap between the position of the virtual 3D image and the image of the affected area of the human body (Page 464 teaches that the ultrasound image is visible via the HMD based on the image processing and camera and marker position detection. Fig. 7 shows the ultrasound image is collected via the probe and processed for the input of a 3D ultrasound image in the HMD. Figs. 8-9 show the visualization of volumetric information that is based on the moving probe capturing a series of slices. The 3D visualization compounds the slice images as seen in Fig. 8. Fig. 10 shows the view through the HMD of the ultrasound probe where the image is at the probe face end with respect to a traditional image. Page 464 teaches operability in real time. Page 466 teaches doctors usually check the real-time 3D slice data showing the inside of the patient's body. They may find something wrong in the cross-section image of a blood vessel, so they will want to scan the slice data by moving the probe and viewing them as volumetric visualization. In the example in Fig. 
8, the network of the blood vessels is recognized and displayed by the volumetric visualization); and capturing the virtual 3D image of the affected area overlaid onto the image of the affected area in response to a second user input to the input device for capturing the virtual 3D image overlaid onto the image of the affected area and fixedly displaying the virtual 3D image overlaid onto the image of the affected area even when the probe of the ultrasound device is not pressed to the affected area of the human body (Page 464 teaches that the ultrasound image is visible via the HMD based on the image processing and camera and marker position detection. Fig. 7 shows the ultrasound image is collected via the probe and processed for the input of a 3D ultrasound image in the HMD. Figs. 8-9 show the visualization of volumetric information that is based on the moving probe capturing a series of slices. The 3D visualization compounds the slice images as seen in Fig. 8. Fig. 10 shows the view through the HMD of the ultrasound probe where the image is at the probe face end with respect to a traditional image. Page 464 teaches operability in real time. Page 466 teaches doctors usually check the real-time 3D slice data showing the inside of the patient's body. They may find something wrong in the cross-section image of a blood vessel, so they will want to scan the slice data by moving the probe and viewing them as volumetric visualization. In the example in Fig. 8, the network of the blood vessels is recognized and displayed by the volumetric visualization).

Regarding claim 13, Tano teaches an apparatus for providing an ultrasound image display service based on mixed reality, comprising: at least one processor (Fig. 7 shows the system architecture with the PC. 
It is inherent that a computational system will utilize a processor and memory for the performance of its computational functions) configured to: obtain an ultrasound image captured through an ultrasound device in response to a contact by a probe of the ultrasound device to an affected area of a human body and to obtain localization information of the probe of the ultrasound device (Page 466 teaches the probe is moved and a slice image is acquired. See Figs. 8-10. Page 466 teaches the camera and marker combination detects the relative 3D position between the doctor's head and the probe. Fig. 1 teaches position sensing according to the camera and marker); convert the ultrasound image of the affected area into a virtual 3D image of the affected area (Page 464 teaches that the ultrasound image is visible via the HMD based on the image processing and camera and marker position detection. Fig. 7 shows the ultrasound image is collected via the probe and processed for the input of a 3D ultrasound image in the HMD. Figs. 8-9 show the visualization of volumetric information that is based on the moving probe capturing a series of slices. Fig. 10 shows the view through the HMD of the ultrasound probe where the image is at the probe face end with respect to a traditional image); and transmit the virtual 3D image of the affected area, an image of the affected area including the probe, and the localization information of the probe of the ultrasound device to a head-mounted display (HMD) via a network (Figs. 8-9 show the visualization of volumetric information that is based on the moving probe capturing a series of slices. Fig. 10 shows the view through the HMD of the ultrasound probe where the image is at the probe face end with respect to a traditional image. Fig. 
7 shows the integration of the camera and ultrasound information into the PC for processing and generation of the 3D ultrasound image that is then implemented in the HMD. Fig. 1 shows the system architecture), wherein the HMD comprises an input device (Fig. 7 provides the system architecture that utilizes the camera and marker combination to assess user control), wherein the virtual 3D image of the affected area overlaid onto an image of the affected area is displayed on a screen of the HMD while the probe of the ultrasound device is pressed to the affected area of the human body (Figs. 8-9 show the visualization of volumetric information that is based on the moving probe capturing a series of slices. Fig. 10 shows the view through the HMD of the ultrasound probe where the image is at the probe face end with respect to a traditional image), wherein the virtual 3D image is displayed adjacent to an end area of the probe which appears on the image of the affected area of the human body (Figs. 8-9 show the visualization of volumetric information that is based on the moving probe capturing a series of slices. Fig. 10 shows the view through the HMD of the ultrasound probe where the image is at the probe face end with respect to a traditional image), wherein a position of the virtual 3D image is moved according to a first user input to the input device to correct a gap between the position of the virtual 3D image and the image of the affected area of the human body (Page 464 teaches that the ultrasound image is visible via the HMD based on the image processing and camera and marker position detection. Fig. 7 shows the ultrasound image is collected via the probe and processed for the input of a 3D ultrasound image in the HMD. Figs. 
8-9 show the visualization of volumetric information that is based on the moving probe capturing a series of slices. The 3D visualization compounds the slice images as seen in Fig. 8. Fig. 10 shows the view through the HMD of the ultrasound probe where the image is at the probe face end with respect to a traditional image. Page 464 teaches operability in real time. Page 466 teaches doctors usually check the real-time 3D slice data showing the inside of the patient's body. They may find something wrong in the cross-section image of a blood vessel, so they will want to scan the slice data by moving the probe and viewing them as volumetric visualization. In the example in Fig. 8, the network of the blood vessels is recognized and displayed by the volumetric visualization), and wherein the virtual 3D image of the affected area overlaid onto the image of the affected area is captured in response to a second user input to the input device for capturing the virtual 3D image overlaid onto the image of the affected area and the virtual 3D image overlaid onto the image of the affected area is fixedly displayed even when the probe of the ultrasound device is not pressed to the affected area of the human body (Page 464 teaches that the ultrasound image is visible via the HMD based on the image processing and camera and marker position detection. Fig. 7 shows the ultrasound image is collected via the probe and processed for the input of a 3D ultrasound image in the HMD. Figs. 8-9 show the visualization of volumetric information that is based on the moving probe capturing a series of slices. The 3D visualization compounds the slice images as seen in Fig. 8. Fig. 10 shows the view through the HMD of the ultrasound probe where the image is at the probe face end with respect to a traditional image. Page 464 teaches operability in real time. Page 466 teaches doctors usually check the real-time 3D slice data showing the inside of the patient's body. 
They may find something wrong in the cross-section image of a blood vessel, so they will want to scan the slice data by moving the probe and viewing them as volumetric visualization. In the example in Fig. 8, the network of the blood vessels is recognized and displayed by the volumetric visualization).

Regarding claim 14, Tano teaches a system for displaying an ultrasound image based on mixed reality, comprising: an ultrasound device to capture an ultrasound image in response to a contact by a probe of the ultrasound device to an affected area of a human body (Page 466 teaches the probe is moved and a slice image is acquired. See Figs. 8-10); at least one processor to obtain the ultrasound image and localization information of the probe, and to convert the ultrasound image into a virtual 3D image of the affected area (Page 466 teaches the camera and marker combination detects the relative 3D position between the doctor's head and the probe. Fig. 1 teaches position sensing according to the camera and marker. Page 464 teaches that the ultrasound image is visible via the HMD based on the image processing and camera and marker position detection. Fig. 7 shows the ultrasound image is collected via the probe and processed for the input of a 3D ultrasound image in the HMD. Figs. 8-9 show the visualization of volumetric information that is based on the moving probe capturing a series of slices. Fig. 10 shows the view through the HMD of the ultrasound probe where the image is at the probe face end with respect to a traditional image); and a head-mounted display (HMD) to receive the virtual 3D image of the affected area, an image of the affected area and the localization information of the probe from the at least one processor and to display, on a screen of the HMD, the virtual 3D image of the affected area overlaid onto an image of the affected area (Figs. 8-9 show the visualization of volumetric information that is based on the moving probe capturing a series of slices. Fig. 
10 shows the view through the HMD of the ultrasound probe where the image is at the probe face end with respect to a traditional image), wherein the HMD comprises an input device (Fig. 7 provides the system architecture that utilizes the camera and marker combination to assess user control), wherein the virtual 3D image is displayed adjacent to an end area of the probe which appears on the image of the affected area of the human body (Fig. 10 shows the view through the HMD of the ultrasound probe where the image is at the probe face end with respect to a traditional image), wherein a position of the virtual 3D image is moved according to a first user input to the input device to correct a gap between the position of the virtual 3D image and the image of the affected area of the human body (Page 464 teaches that the ultrasound image is visible via the HMD based on the image processing and camera and marker position detection. Fig. 7 shows the ultrasound image is collected via the probe and processed for the input of a 3D ultrasound image in the HMD. Figs. 8-9 show the visualization of volumetric information that is based on the moving probe capturing a series of slices. The 3D visualization compounds the slice images as seen in Fig. 8. Fig. 10 shows the view through the HMD of the ultrasound probe where the image is at the probe face end with respect to a traditional image. Page 464 teaches operability in real time. Page 466 teaches doctors usually check the real-time 3D slice data showing the inside of the patient's body. They may find something wrong in the cross-section image of a blood vessel, so they will want to scan the slice data by moving the probe and viewing them as volumetric visualization. In the example in Fig. 
8, the network of the blood vessels is recognized and displayed by the volumetric visualization), and wherein the virtual 3D image of the affected area overlaid onto the image of the affected area is captured in response to a second user input to the input device for capturing the virtual 3D image overlaid onto the image of the affected area and the virtual 3D image overlaid onto the image of the affected area is fixedly displayed even when the probe of the ultrasound device is not pressed to the affected area of the human body (Page 464 teaches that the ultrasound image is visible via the HMD based on the image processing and camera and marker position detection. Fig. 7 shows the ultrasound image is collected via the probe and processed for the input of a 3D ultrasound image in the HMD. Figs. 8-9 show the visualization of volumetric information that is based on the moving probe capturing a series of slices. The 3D visualization compounds the slice images as seen in Fig. 8. Fig. 10 shows the view through the HMD of the ultrasound probe where the image is at the probe face end with respect to a traditional image. Page 464 teaches operability in real time. Page 466 teaches doctors usually check the real-time 3D slice data showing the inside of the patient's body. They may find something wrong in the cross-section image of a blood vessel, so they will want to scan the slice data by moving the probe and viewing them as volumetric visualization. In the example in Fig. 8, the network of the blood vessels is recognized and displayed by the volumetric visualization).

Regarding claim 15, Tano teaches the system in claim 14, as discussed above. Tano further teaches the system, wherein the localization information is measured and transmitted to the at least one processor (Page 466 teaches the camera and marker combination detects the relative 3D position between the doctor's head and the probe. Fig. 1 teaches position sensing according to the camera and marker. Fig. 
7 shows the integration of the camera and ultrasound information into the PC for processing and generation of the 3D ultrasound image that is then implemented in the HMD. Fig. 1 shows the system architecture).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Tano et al. ("Simple Augmented Reality System for 3D Ultrasonic Image by See-through HMD and Single Camera and Marker Combination", 2012) in view of Caluser et al. (PGPUB No. US 2015/0051489).

Regarding claim 16, Tano teaches the system in claim 15, as discussed above. Tano further teaches a system, wherein the localization information comprises a position information and a path information (Page 465 teaches the camera was also attached to the doctor's glasses to find out whether the probe stays in the doctor's field of view. See Fig. 4, which shows the camera field of view). 
However, Tano is silent regarding a system, wherein the localization information comprises an angle information and a movement speed information of a tracking device.

In an analogous imaging field of endeavor, regarding ultrasound probe tracking, Caluser teaches a system, wherein the localization information comprises a position information, an angle information, a movement speed information, and a path information of a tracking device (Paragraph 0112 teaches the collection of speed, position, and orientation of the probe. Paragraph 0140 teaches that the speed, position, and orientation are continuously monitored. Paragraph 0137 teaches the assessment and display of the images across the anatomical region. Paragraph 0126 teaches the assessment of the viewing angle. Fig. 12 shows the scan path).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Tano with Caluser’s teaching of localization information based on position, angle, speed, and path information. This modified apparatus would allow the user to improve image quality (Paragraph 0156 of Caluser). Furthermore, the modification would significantly reduce the time of the examination by eliminating the time-consuming manual labeling of images and speeding up the target finding at subsequent examinations (Paragraph 0013 of Caluser).

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Tano et al. ("Simple Augmented Reality System for 3D Ultrasonic Image by See-through HMD and Single Camera and Marker Combination", 2012) in view of Buras et al. (PGPUB No. US 2018/0225993).

Regarding claim 18, Tano teaches the method in claim 1, as discussed above. 
Tano further teaches the method, wherein a virtual 3D image of affected areas overlaid onto images of the affected areas is captured in response to a plurality of user inputs to the input device for capturing the virtual 3D image overlaid onto the images of the affected areas and the virtual 3D image overlaid onto the images of the affected areas is fixedly displayed even when the probe of the ultrasound device is not pressed to any of the affected areas of the human body (Page 464 teaches that the ultrasound image is visible via the HMD based on the image processing and camera and marker position detection. Fig. 7 shows the ultrasound image is collected via the probe and processed for the input of a 3D ultrasound image in the HMD. Figs. 8-9 show the visualization of volumetric information that is based on the moving probe capturing a series of slices. The 3D visualization compounds the slice images as seen in Fig. 8. Fig. 10 shows the view through the HMD of the ultrasound probe where the image is at the probe face end with respect to a traditional image. Page 464 teaches operability in real time. Page 466 teaches doctors usually check the real-time 3D slice data showing the inside of the patient's body. They may find something wrong in the cross-section image of a blood vessel, so they will want to scan the slice data by moving the probe and viewing them as volumetric visualization. In the example in Fig. 8, the network of the blood vessels is recognized and displayed by the volumetric visualization). However, Tano is silent regarding a method, wherein there is a plurality of 3D images. 
In an analogous imaging field of endeavor, regarding ultrasound probe tracking and mixed reality, Buras teaches a method, wherein a plurality of virtual 3D images of affected areas overlaid onto images of the affected areas are captured in response to a plurality of user inputs to the input device for capturing the plurality of virtual 3D images overlaid onto the images of the affected areas and the plurality of virtual 3D images overlaid onto the images of the affected areas are fixedly displayed even when the probe of the ultrasound device is not pressed to any of the affected areas of the human body (Paragraph 0010 teaches the use of HMD for augmented reality and providing ultrasound feedback. The system assesses the 3D data and the movement, position and orientation of the probe and stores that information in association with 3D ultrasound images. Paragraph 0123 teaches the capture and storage of multiple ultrasound images).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Tano with Buras’s teaching of acquisition of multiple 3D images. This modified method would allow a user to achieve improved diagnostic or treatment outcomes (Abstract of Buras). Furthermore, the modification provides improved training and 3D guidance for novice users that may have limited knowledge of the imaging system (Paragraphs 0003-0004 of Buras).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Zhang et al. (PGPUB No. US 2021/0030486): Teaches acquisition of multiple 3D images for AR.
Takahashi (PGPUB No. US 2022/0087652): Teaches acquisition of multiple 3D images for AR.
Djajadiningrat et al. (PGPUB No. US 2019/0117190): Teaches assessment of 3D images that stay after the movement of the probe. Teaches acquisition of multiple 3D images for AR.
Tanaka et al. (US Patent No. 10,456,106): Teaches acquisition of multiple 3D images for AR. 
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ADIL PARTAP S VIRK, whose telephone number is (571) 272-8569. The examiner can normally be reached Mon-Fri 8-5.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pascal Bui-Pho, can be reached at 571-272-2714. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ADIL PARTAP S VIRK/
Primary Examiner, Art Unit 3798

Prosecution Timeline

Apr 22, 2024
Application Filed
May 21, 2025
Non-Final Rejection — §102, §103, §112
Aug 13, 2025
Response Filed
Aug 26, 2025
Final Rejection — §102, §103, §112
Nov 25, 2025
Request for Continued Examination
Nov 28, 2025
Response after Non-Final Action
Feb 17, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599313
Health Trackers for Autonomous Targeting of Tissue Sampling Sites
2y 5m to grant Granted Apr 14, 2026
Patent 12569221
Systems and Methods for Infrared-Enhanced Ultrasound Visualization
2y 5m to grant Granted Mar 10, 2026
Patent 12569228
ULTRASOUND DIAGNOSTIC APPARATUS AND CONTROL METHOD FOR ULTRASOUND DIAGNOSTIC APPARATUS
2y 5m to grant Granted Mar 10, 2026
Patent 12569304
OPTICAL COHERENCE TOMOGRAPHY GUIDED ROBOTIC OPHTHALMIC PROCEDURES
2y 5m to grant Granted Mar 10, 2026
Patent 12564384
SYSTEM AND METHODS FOR JOINT SCAN PARAMETER SELECTION
2y 5m to grant Granted Mar 03, 2026
Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
48%
Grant Probability
89%
With Interview (+41.3%)
3y 2m
Median Time to Grant
High
PTA Risk
Based on 213 resolved cases by this examiner. Grant probability derived from career allow rate.
