DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged that this application claims priority to foreign application TW113102398, filed 1/22/2024. Certified copies of the papers required by 37 CFR 1.55 have been received. Priority is acknowledged under 35 U.S.C. 119(a)-(d) and 37 CFR 1.55.
Information Disclosure Statement
The IDS dated 7/09/2024 has been considered and placed in the application file.
1st Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 5, 6, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication No. 2025/0139778 A1 (Pappu et al.).
Claim 1
Regarding Claim 1, Pappu et al. teach an image processing system for analyzing an optic cup and optic disc, comprising: a processor, accessing a program to perform the following operations: ("All processes described herein may be embodied in, and fully automated via, software code modules executed by a computing system that includes one or more computers or processors," par. 49) receiving a fundus map, and using an image recognition model to recognize an optic cup and an optic disc in the fundus map, ("Additionally, or optionally, the feature extraction module 116 may be configured with a pair of models to segment an Optic Disc (OD) and an Optic Cup (OC)
region from the preprocessed fundus retinal image," par. 32) and to mark outlines of the optic cup and the optic disc; ("The post image processing module 122 may be configured to enable the crop out segmented Optic Disc (OD) and the Optic Cup (OC) maps to focus more on the Optic Disc (OD) and the Optic Cup (OC) regions. The post image processing module 122 may be configured to recognize boundary parameters of the Optic Disc (OD) and the Optic Cup (OC) regions to estimate the plurality of parameters by applying threshold techniques," par. 34) [AltContent: textbox (Figure 2 shows post-processing images including the marked boundaries of the optic disc and cup.)] in an interpretation mode, generating interpretation data for risk assessment of eyes, according to at least the outlines of the optic cup and the optic disc; ("The plurality of parameters comprises a vertical Cup-to-Disc Ratio (CDR), a horizontal Cup-to-Disc Ratio (CDR), a Neuro Retinal Rim (NRR) area along with an Inferior, Superior, Nasal, and Temporal parameters (ISNT), and a Papilledema (PPA) to detect the Glaucoma disease. The computing device 102 may be configured to receive the best parameters from the pair of models (118 and 120) and other clinical parameters as input for glaucoma detection and predict the risk score of glaucoma. The patient can check the risk score to take necessary precautions before entering the severe condition," par. 35) wherein the image recognition model is a deep learning model ("The proposed multivariable artificial intelligence system provides a deep learning architecture to segment the optic disc and optic cup," par. 45) which has been trained by using pre-collected fundus maps as training data, ("the control unit 110 may be configured to receive a validation dataset obtained from a plurality of data sources as the reference data," par.
37) and on each of the pre-collected fundus maps, the outlines of the optic cup and the optic disc have been previously marked ("the control unit 110 trains the feature fusion model 118 and the edge extraction model 120 using the input data and the reference data as training data to extract segmented Optic Disc (OD) and the Optic Cup (OC) maps from the input data," par. 38) by an ophthalmologist ("the plurality of data sources comprises at least one, but not limited to, a data warehouse, at least one health care center, a plurality of outpatient and clinical visits data, discharge reports, electronic medical records, picture archiving, and communication systems," par. 39).
It is recognized that the citations and evidence provided above are drawn from potentially different embodiments of a single reference. Nevertheless, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to employ combinations and sub-combinations of these complementary embodiments, because Pappu et al. explicitly provide motivation to do so, at least in paragraph [0060]: “It should be emphasized that many variations and modifications may be made to the above-described embodiments, the elements of which are to be understood as being among other acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure.” The reference otherwise encourages experimentation and optimization.
The rejection of system claim 1 above applies mutatis mutandis to the corresponding limitations of method claim 6 while noting that the rejection above cites to both device and method disclosures. Claim 6 is mapped below for clarity of the record and to specify any new limitations not included in claim 1.
Claim 5
Regarding claim 5, Pappu et al. teach the image processing system for analyzing an optic cup and optic disc as claimed in claim 1 as noted above, wherein the interpretation data is a Cup-to-Disc Ratio (CDR), or a Rim-to-Disc Ratio (RDR), which is obtained by the processor based on the outlines of the optic cup and the optic disc ("The boundary parameters comprise boundaries of the Optic Disc (OD) and the Optic Cup (OC), vertical diameters of the OD and OC, centroid values, and thereof to evaluate the plurality of parameters for each of the pair of models. Additionally, or optionally, the parameter selection module 124 may be configured to select the optimum parameters from the pair of models. The plurality of parameters comprises a vertical Cup-to-Disc Ratio (CDR), a horizontal Cup-to-Disc Ratio (CDR)," par. 34-35).
The rejection of system claim 5 above applies mutatis mutandis to the corresponding limitations of method claim 10 while noting that the rejection above cites to both device and method disclosures. Claim 10 is mapped below for clarity of the record and to specify any new limitations not included in claim 5.
Claim 6
Regarding claim 6, Pappu et al. teach an image processing method for analyzing an optic cup and optic disc, applied to an electronic apparatus, the method comprising: obtaining a fundus map through a processor of the electronic apparatus; ("Additionally, or optionally, the feature extraction module 116 may be configured with a pair of models to segment an Optic Disc (OD) and an Optic Cup (OC) region from the preprocessed fundus retinal image," par. 32) recognizing an optic cup and an optic disc in the fundus map, and marking outlines of the optic cup and the optic disc; ("The post image processing module 122 may be configured to enable the crop out segmented Optic Disc (OD) and the Optic Cup (OC) maps to focus more on the Optic Disc (OD) and the Optic Cup (OC) regions. The post image processing module 122 may be configured to recognize boundary parameters of the Optic Disc (OD) and the Optic Cup (OC) regions to estimate the plurality of parameters by applying threshold techniques," par. 34) in an interpretation mode, using the processor to generate interpretation data for risk assessment of eyes according to at least the outlines of the optic cup and the optic disc; ("The plurality of parameters comprises a vertical Cup-to-Disc Ratio (CDR), a horizontal Cup-to-Disc Ratio (CDR), a Neuro Retinal Rim (NRR) area along with an Inferior, Superior, Nasal, and Temporal parameters (ISNT), and a Papilledema (PPA) to detect the Glaucoma disease. The computing device 102 may be configured to receive the best parameters from the pair of models (118 and 120) and other clinical parameters as input for glaucoma detection and predict the risk score of glaucoma. The patient can check the risk score to take necessary precautions before entering the severe condition," par. 35) wherein the image recognition model is a deep learning model ("The proposed multivariable artificial intelligence system provides a deep learning architecture to segment the optic disc and optic cup," par. 
45) which has been trained by using pre-collected fundus maps as training data, ("the control unit 110 may be configured to receive a validation dataset obtained from a plurality of data sources as the reference data," par. 37) and on each of the pre-collected fundus maps, the outlines of the optic cup and the optic disc have been previously marked ("the control unit 110 trains the feature fusion model 118 and the edge extraction model 120 using the input data and the reference data as training data to extract segmented Optic Disc (OD) and the Optic Cup (OC) maps from the input data," par. 38) by an ophthalmologist ("the plurality of data sources comprises at least one, but not limited to, a data warehouse, at least one health care center, a plurality of outpatient and clinical visits data, discharge reports, electronic medical records, picture archiving, and communication systems," par. 39).
Pappu et al. is referenced as per claim 1.
Claim 10
Regarding claim 10, Pappu et al. teach the image processing method for analyzing an optic cup and optic disc as claimed in claim 6 as noted above, wherein the interpretation data is the Cup-to-Disc Ratio (CDR), or the Rim-to-Disc Ratio (RDR), which is obtained by the processor based on the outlines of the optic cup and the optic disc ("The boundary parameters comprise boundaries of the Optic Disc (OD) and the Optic Cup (OC), vertical diameters of the OD and OC, centroid values, and thereof to evaluate the plurality of parameters for each of the pair of models. Additionally, or optionally, the parameter selection module 124 may be configured to select the optimum parameters from the pair of models. The plurality of parameters comprises a vertical Cup-to-Disc Ratio (CDR), a horizontal Cup-to-Disc Ratio (CDR)," par. 34-35).
Pappu et al. is referenced as per claim 1.
2nd Claim Rejections - 35 USC § 103
Claims 2 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication No. 2025/0139778 A1 (Pappu et al.) in view of US Patent Application Publication No. 2011/0176108 A1 (Nakagawa et al.).
Claim 2
Regarding Claim 2, Pappu et al. teach the image processing system for analyzing an optic cup and optic disc as claimed in claim 1 as noted above, further comprising an input device ("the computing device comprises an input module," par. 12); wherein the processor further performs the following operations: through the input device, receiving, from the ophthalmologist, instructions to modify, clear, or accept the plurality of outline points ("The parameter selection module 124 selects the best parameters by calculating the Cup-to-Disc Ratio (CDR), Inferior, Superior, Nasal, and Temporal parameters (ISNT), and entropy parameters of the Multi Spatial Attention Feature Fusion (MSAFF) model and the Multi-Dilated Edge extraction (MDEE) model. Additionally, or optionally, the optimal parameters from the pair of models (118 and 120) from the parameter selection module 124 are input into the control unit 110 by using a hardware description language synthesis through a programming language," par. 35-36) ("The boundary parameters comprise boundaries of the Optic Disc (OD) and the Optic Cup (OC), vertical diameters of the OD and OC, centroid values, and thereof to evaluate the plurality of parameters for each of the pair of models," par. 34).
Pappu et al. do not explicitly teach all of: a display device; in a marking mode, marking the outlines of the optic cup and the optic disc with a plurality of outline points; and displaying the plurality of outline points on the display device.
[AltContent: textbox (Figure 12 shows the extracted and marked disc and cup regions.)]
However, Nakagawa et al. teach a display device, in a marking mode, marking the outlines of the optic cup and the optic disc with a plurality of outline points, displaying the plurality of outline points on the display device ("The disk region 500 and the contour line 501 thereof as well as the cup region 510 and the contour line 511 thereof, which are obtained by image processing as described above, may be saved as appended information to the ocular fundus image and recorded to a recording medium such as the hard disk 105. Saved disc regions and cup regions may be displayed in time series on the display 107," par. 83 wherein the contour line is a collection of points).
Therefore, taking the teachings of Pappu et al. and Nakagawa et al. as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the optic cup and disc analyzing methods as taught by Pappu et al. to use the display system as taught by Nakagawa et al. The suggestion/motivation for doing so would have been that, “Saved disc regions and cup regions may be displayed in time series on the display 107. Also, saved disc contour lines and cup contour lines may be displayed on the display 107 to allow for correction of these contour lines if necessary,” as noted by the Nakagawa et al. disclosure in paragraph [0083]. This further motivates the combination because the combination would predictably have greater utility, as there is a reasonable expectation that the display will allow the professional to observe and edit the disc and cup boundaries; and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
The rejection of system claim 2 above applies mutatis mutandis to the corresponding limitations of method claim 7 while noting that the rejection above cites to both device and method disclosures. Claim 7 is mapped below for clarity of the record and to specify any new limitations not included in claim 2.
Claim 7
Regarding claim 7, Pappu et al. teach the image processing method for analyzing an optic cup and optic disc as claimed in claim 6 as noted above, further comprising: directing the processor to modify, clear, or accept the plurality of outline points, according to instructions from the ophthalmologist, the processor receiving the instructions through an input device of the electronic apparatus ("The parameter selection module 124 selects the best parameters by calculating the Cup-to-Disc Ratio (CDR), Inferior, Superior, Nasal, and Temporal parameters (ISNT), and entropy parameters of the Multi Spatial Attention Feature Fusion (MSAFF) model and the Multi-Dilated Edge extraction (MDEE) model. Additionally, or optionally, the optimal parameters from the pair of models (118 and 120) from the parameter selection module 124 are input into the control unit 110 by using a hardware description language synthesis through a programming language," par. 35-36) ("The boundary parameters comprise boundaries of the Optic Disc (OD) and the Optic Cup (OC), vertical diameters of the OD and OC, centroid values, and thereof to evaluate the plurality of parameters for each of the pair of models," par. 34).
Pappu et al. do not explicitly teach all of using the processor while in a marking mode to mark the outlines of the optic cup and the optic disc with a plurality of outline points, displaying the plurality of outline points on a display device of the electronic apparatus.
However, Nakagawa et al. teach using the processor while in a marking mode to mark the outlines of the optic cup and the optic disc with a plurality of outline points, displaying the plurality of outline points on a display device of the electronic apparatus ("The disk region 500 and the contour line 501 thereof as well as the cup region 510 and the contour line 511 thereof, which are obtained by image processing as described above, may be saved as appended information to the ocular fundus image and recorded to a recording medium such as the hard disk 105. Saved disc regions and cup regions may be displayed in time series on the display 107," par. 83 wherein the contour line is a collection of points).
Pappu et al. and Nakagawa et al. are combined as per claim 2.
3rd Claim Rejections - 35 USC § 103
Claims 3 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication No. 2025/0139778 A1 (Pappu et al.) in view of US Patent Application Publication No. 2017/0112372 A1 (Chakravorty et al.).
Claim 3
Regarding Claim 3, Pappu et al. teach the image processing system for analyzing an optic cup and optic disc as claimed in claim 1 as noted above.
Pappu et al. do not explicitly teach all of the image recognition model further recognizes a blood vessel segmentation map from the fundus map, and the image recognition model recognizes the optic disc based on the color difference between the optic disc and the surroundings thereof, or based on the aforementioned color difference and the blood vessel segmentation map simultaneously.
However, Chakravorty et al. teach the image recognition model further recognizes a blood vessel segmentation map from the fundus map, ("The vessel segmentation processor 104 segments or extracts an image of the retinal blood vessels (e.g., arteries and veins) from the retinal fundus image," par. 19) and the image recognition model recognizes the optic disc based on the color difference between the optic disc and the surroundings thereof, or based on the aforementioned color difference and the blood vessel segmentation map simultaneously ("locating the optic disc is achieved by first generating a green channel retinal image from the retinal fundus image. In color/RGB retinal fundus images, the green channel often displays the best contrast between the retinal blood vessels and the background. A non-overlapping sliding window approach may then be applied on the green channel retinal image and on the corresponding binary blood vessel mask to locate the position of the optic disc," par. 27).
Therefore, taking the teachings of Pappu et al. and Chakravorty et al. as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the optic cup and disc analyzing methods as taught by Pappu et al. to use the vessel segmentation processing system as taught by Chakravorty et al. The suggestion/motivation for doing so would have been that, “The feature extractor 108 extracts one or more features from the retinal fundus image, based on the segmented image of the retinal blood vessels and the location of the optic disc. The extracted features are features that can help classify the type of the eye depicted in the retinal fundus image and may include both supervised and unsupervised features,” as noted by the Chakravorty et al. disclosure in paragraph [0021]. This further motivates the combination because the combination would predictably have greater utility, as there is a reasonable expectation that a blood vessel segmentation image can help identify the location of the optic disc and cup; and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
The rejection of system claim 3 above applies mutatis mutandis to the corresponding limitations of method claim 8 while noting that the rejection above cites to both device and method disclosures. Claim 8 is mapped below for clarity of the record and to specify any new limitations not included in claim 3.
Claim 8
Regarding claim 8, Pappu et al. teach the image processing method for analyzing an optic cup and optic disc as claimed in claim 6 as noted above.
Pappu et al. do not explicitly teach all of the image recognition model is used to further recognize a blood vessel segmentation map from the fundus map, and the image recognition model is used to further recognize the optic disc, based on the color difference between the optic disc and the surroundings thereof, or based on the aforementioned color difference and the blood vessel segmentation map.
However, Chakravorty et al. teach the image recognition model is used to further recognize a blood vessel segmentation map from the fundus map, ("The vessel segmentation processor 104 segments or extracts an image of the retinal blood vessels (e.g., arteries and veins) from the retinal fundus image," par. 19) and the image recognition model is used to further recognize the optic disc, based on the color difference between the optic disc and the surroundings thereof, or based on the aforementioned color difference and the blood vessel segmentation map ("locating the optic disc is achieved by first generating a green channel retinal image from the retinal fundus image. In color/RGB retinal fundus images, the green channel often displays the best contrast between the retinal blood vessels and the background. A non-overlapping sliding window approach may then be applied on the green channel retinal image and on the corresponding binary blood vessel mask to locate the position of the optic disc," par. 27).
Pappu et al. and Chakravorty et al. are combined as per claim 3.
4th Claim Rejections - 35 USC § 103
Claims 4 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication No. 2025/0139778 A1 (Pappu et al.) and US Patent Application Publication No. 2017/0112372 A1 (Chakravorty et al.) in view of US Patent Application Publication No. 2012/0230564 A1 (Liu et al.).
Claim 4
Regarding Claim 4, Pappu et al. teach the image processing system for analyzing an optic cup and optic disc as claimed in claim 1 as noted above.
Chakravorty et al. teach the image recognition model further recognizes a blood vessel segmentation map corresponding to the fundus map, ("The vessel segmentation processor 104 segments or extracts an image of the retinal blood vessels (e.g., arteries and veins) from the retinal fundus image," par. 19) and the image recognition model further recognizes the optic cup from the optic disc, according to the color difference and shape difference compared with the optic disc ("locating the optic disc is achieved by first generating a green channel retinal image from the retinal fundus image. In color/RGB retinal fundus images, the green channel often displays the best contrast between the retinal blood vessels and the background. A non-overlapping sliding window approach may then be applied on the green channel retinal image and on the corresponding binary blood vessel mask to locate the position of the optic disc," par. 27) ("the start point and the end point of the optic disc are computed by analyzing the gradient of the pixel intensity values, and selecting the highest point along the gradient as the start point and the lowest point along the gradient as the end point. The middle point between the start point and the end point can subsequently be used to compute the center of the optic disc," par. 39 wherein the center of the optic disc is the optic cup).
Pappu et al. and Chakravorty et al. do not explicitly teach all of the image recognition model further recognizes the optic cup from the optic disc, according to the shapes and turns of blood vessels in the blood vessel segmentation map.
However, Liu et al. teach the image recognition model further recognizes the optic cup from the optic disc, according to the shapes and turns of blood vessels in the blood vessel segmentation map ("Kinks are defined as the morphological bending of small blood vessels at the boundary between the optic cup and optic disc and are formed when small vessels cross over from the surrounding disc region into the depression formed by the optic cup. The locations of kinks are thus useful for the assessment of the border of the optic cup to determine the optic cup boundaries. This module makes uses of the techniques explained in [4] to detect scenarios of kinking. Specifically, in late glaucoma cases, the vessel kinking progresses to a form known as the bayoneting of vessels," par. 85).
Therefore, taking the teachings of Pappu et al., Chakravorty et al., and Liu et al. as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the optic cup and disc analyzing methods as taught by Pappu et al., as combined with the vessel segmentation processing system as taught by Chakravorty et al., to use the optic cup detection methods as taught by Liu et al. The suggestion/motivation for doing so would have been that, “The locations of kinks are thus useful for the assessment of the border of the optic cup to determine the optic cup boundaries,” as noted by the Liu et al. disclosure in paragraph [0085]. This further motivates the combination because the combination would predictably have greater utility, as there is a reasonable expectation that locating kinks in the blood vessels can be used to determine the location of the optic cup; and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
The rejection of system claim 4 above applies mutatis mutandis to the corresponding limitations of method claim 9 while noting that the rejection above cites to both device and method disclosures. Claim 9 is mapped below for clarity of the record and to specify any new limitations not included in claim 4.
Claim 9
Regarding claim 9, Pappu et al. teach the image processing method for analyzing an optic cup and optic disc as claimed in claim 6 as noted above.
Chakravorty et al. teach the image recognition model is used to further recognize a blood vessel segmentation map corresponding to the fundus map, ("The vessel segmentation processor 104 segments or extracts an image of the retinal blood vessels (e.g., arteries and veins) from the retinal fundus image," par. 19) and the image recognition model is used to further recognize the optic cup in the optic disc, according to the color difference and shape difference from the optic disc ("locating the optic disc is achieved by first generating a green channel retinal image from the retinal fundus image. In color/RGB retinal fundus images, the green channel often displays the best contrast between the retinal blood vessels and the background. A non-overlapping sliding window approach may then be applied on the green channel retinal image and on the corresponding binary blood vessel mask to locate the position of the optic disc," par. 27) ("the start point and the end point of the optic disc are computed by analyzing the gradient of the pixel intensity values, and selecting the highest point along the gradient as the start point and the lowest point along the gradient as the end point. The middle point between the start point and the end point can subsequently be used to compute the center of the optic disc," par. 39 wherein the center of the optic disc is the optic cup).
Pappu et al. and Chakravorty et al. do not explicitly teach all of the image recognition model is used to further recognize the optic cup in the optic disc, according to the shapes and turns of blood vessels in the blood vessel segmentation map.
However, Liu et al. teach the image recognition model is used to further recognize the optic cup in the optic disc, according to the shapes and turns of blood vessels in the blood vessel segmentation map ("Kinks are defined as the morphological bending of small blood vessels at the boundary between the optic cup and optic disc and are formed when small vessels cross over from the surrounding disc region into the depression formed by the optic cup. The locations of kinks are thus useful for the assessment of the border of the optic cup to determine the optic cup boundaries. This module makes uses of the techniques explained in [4] to detect scenarios of kinking. Specifically, in late glaucoma cases, the vessel kinking progresses to a form known as the bayoneting of vessels," par. 85).
Pappu et al., Chakravorty et al., and Liu et al. are combined as per claim 4.
References Cited
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
US Patent Application Publication No. 2018/0140180 A1 to Coleman discloses identifying the age and health of an optic nerve head and its vasculature based on analysis of vector relationships of the blood vessels and the neuroretinal rim within an image.
US Patent Application Publication No. 2020/0401841 A1 to Lee et al. discloses an image classification neural network configured to learn an extracted ROI and to classify images as normal fundus images or glaucoma fundus images on the basis of the learned ROI, and a vertical cup-to-disc ratio calculator configured to recognize an optic disc and an optic cup.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KARSTEN F LANTZ whose telephone number is (571) 272-4564. The examiner can normally be reached Monday-Friday 8:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ms. Jennifer Mehmood can be reached on 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Karsten F. Lantz/Examiner, Art Unit 2664
Date: 2/12/2026
/JENNIFER MEHMOOD/Supervisory Patent Examiner, Art Unit 2664