Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09/03/2025 has been entered.
Status of Claims
This Office action is in response to Applicant's Amendments and Remarks filed on 09/03/2025 for application number 18/273,589, filed on 07/21/2023, in which claims 11-20 were previously presented for examination.
Claim 11 is amended.
Claims 11-20 are currently pending in this application.
Response to Arguments
Applicant's Amendments and Remarks filed on 09/03/2025 in response to the Final Office action mailed on 07/24/2025 have been fully considered and are addressed as follows:
Regarding the claim rejections under 35 U.S.C. § 103: Applicant has amended independent claim 11, and the amendment changes the scope of the claims. The Office therefore sets forth new grounds of rejection in the non-final Office action below, and Applicant's prior arguments are moot in view of the new grounds of rejection.
NON-FINAL OFFICE ACTION
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 11, 12, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Levinson et al. (US 2018/0136651 A1) in view of NPL-1 (Kuhn, Christopher B., et al. “Introspective failure prediction for semantic image segmentation.” 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC). IEEE, 2020) further in view of Ebersole et al. (US 6,578,017 B1).
Regarding claim 11, Levinson et al. discloses a method for simplifying a takeover of control of a motor vehicle by a vehicle-external operator (Levinson et al. at para. [0073]: “Autonomous vehicle service platform 401 includes teleoperator 404 (e.g., a teleoperator computing device)”), the method comprising:
in a conditionally automated operation of the motor vehicle (Levinson et al. at para. [0058]: “autonomous vehicle controller 147 is configured to invoke teleoperation services to reduce the likelihood that an autonomous vehicle 109 is delayed in transit while resolving an event or issue that may otherwise affect the safety of the occupants”), acquiring and semantically segmenting images of an environment of the motor vehicle by way of a predetermined trained segmentation model (Levinson et al. at para. [0071]: “perception engine 466 includes an object detector 442,” “Object detector 442 is configured to distinguish objects relative to other features in the environment,” and “perception engine 466 may also perform other perception-related functions, such as segmentation and tracking”; para. [0107]: “Segmentation processor 2310 is configured to extract ground plane data and/or to segment portions of an image to distinguish objects from each other and from static imagery (e.g., background),” “Classifier 2360 is configured to identify an object and to classify that object by classification type ( e.g., as a pedestrian, cyclist, etc.) and by energy/activity (e.g. whether the object is dynamic or static), whereby data representing classification is described by a semantic label,” and “classifier 2360 may apply machine learning techniques to generate perception engine data 2354”; Model training is the primary step in machine learning),
generating a request for the takeover of control by the vehicle-external operator when a predetermined criterion is satisfied (Levinson et al. at para. [0072]: “In the event that trajectory evaluator 465 has insufficient information to ensure a confidence level high enough to provide collision-free, optimized travel, planner 464 can generate a request to teleoperator 404 for teleoperator support”), and
sending the request and the visualization to the vehicle-external operator (Levinson et al. at para. [0072]: “In the event that trajectory evaluator 465 has insufficient information to ensure a confidence level high enough to provide collision-free, optimized travel, planner 464 can generate a request to teleoperator 404 for teleoperator support”; para. [0167]: “teleoperator manager 3907 is configured to present, via user interface 3901, views 3910 (e.g., 3D views) as specific views of portions of physical environment, whereby each of views 3910 may relate to a sensed portion of the environment as generated by a corresponding sensor”).
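Office Note (illustrative only): The following minimal Python sketch illustrates the general operation of acquiring an image and semantically segmenting it with a predetermined trained segmentation model. The specific architecture (DeepLabV3), the pretrained weights, and the file name "frame.png" are assumptions for illustration and are not features of Levinson et al.

```python
# Minimal sketch (illustrative assumptions): acquire a camera frame and
# semantically segment it with a pretrained model. The architecture,
# weights, and file name are assumed for illustration only.
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50
from PIL import Image

model = deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("frame.png").convert("RGB")  # acquired camera frame
batch = preprocess(image).unsqueeze(0)          # [1, 3, H, W]

with torch.no_grad():
    logits = model(batch)["out"]                # [1, C, H, W] class scores
label_map = logits.argmax(dim=1).squeeze(0)     # [H, W] semantic labels
```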
However, Levinson et al. does not explicitly state based on at least one of the images in each case, predicting errors of the segmentation model,
for an error prediction, automatically generating an image-based visualization that includes an image and visual highlighting of an area of the image corresponding to the error prediction, wherein the visual highlighting is superimposed on the image.
Nevertheless, Levinson et al. at least suggests the idea of a need for a sufficient confidence level to guarantee collision-free travel which enhances a degree of certainty for safety (see Levinson et al. at para. [0087]).
In the same field of endeavor, NPL-1 teaches based on at least one of the images in each case, predicting errors of the segmentation model (NPL-1 at Abstract: “We propose using the concept of introspection to predict the failures of a given semantic segmentation model. A separate introspective model is trained to predict the errors of a given model. This is accomplished by training the given model with the errors made on a set of previous inputs”),
for an error prediction, automatically generating an image-based visualization that includes an image and visual highlighting of an area of the image corresponding to the error prediction (NPL-1 at Fig. 4 and pg. 5, left column: “we visualize the predicted error maps to highlight the differences between uncertainty derived from the model’s output and the failure prediction obtained from a separate introspective module”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Levinson et al. by adding the error prediction as taught by NPL-1, with a reasonable expectation of success. The motivation to modify the method of Levinson et al. in view of NPL-1 is to improve the degree of certainty for safety, as the error prediction taught by NPL-1 provides improved accuracy in predicting failures of the segmentation model.
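Office Note (illustrative only): The introspection approach taught by NPL-1, in which a separate introspective model is trained on the errors a frozen segmentation model made on previous inputs so as to predict per-pixel failures, may be sketched as follows. The network architecture, helper functions, and the assumption that the segmentation model maps [N, 3, H, W] images to [N, C, H, W] logits are illustrative only.

```python
# Sketch of NPL-1-style introspection (illustrative assumptions): a
# separate model is trained to predict, per pixel, where a frozen
# segmentation model errs, using that model's past errors as targets.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntrospectionNet(nn.Module):
    """Tiny fully convolutional net: image -> per-pixel failure logit."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, x):
        return self.net(x)                      # [N, 1, H, W]

def error_targets(seg_model, images, gt_labels):
    """1.0 where the frozen segmentation model disagrees with ground truth."""
    # seg_model is assumed to map [N, 3, H, W] images to [N, C, H, W] logits.
    with torch.no_grad():
        pred = seg_model(images).argmax(dim=1)  # [N, H, W]
    return (pred != gt_labels).float().unsqueeze(1)

def train_step(introspector, seg_model, images, gt_labels, optimizer):
    targets = error_targets(seg_model, images, gt_labels)
    loss = F.binary_cross_entropy_with_logits(introspector(images), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```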
However, Levinson et al. in view of NPL-1 does not explicitly state wherein the visual highlighting is superimposed on the image.
In the same field of endeavor, Ebersole et al. teaches wherein the visual highlighting is superimposed on the image (Ebersole et al. at col. 3, ln. 51-54: “Target Confidence - The level of confidence that a target object has been correctly detected by an AOD at a particular location in an image, based on that location's distance in the image from the context objects”; col. 10, ln. 6-7: “The present invention can generate image overlays, to show target confidence levels at points in imagery”; col. 10, ln. 28-33: “FIG. 9 shows an example display, consisting of a greyscale image with colored overlay (shading patterns are used instead of colors for the purposes of this patent). The overlaid target confidence information can be divided into two classes: information about targets, and information about regions”; col. 10, ln. 52-57: “A low target confidence symbol 40 changes shape from box to triangle. Alternately or in combination with this, the symbols might be rendered in different colors to show the level of target confidence associated with each. For instance, red, yellow, and green symbols might mark high, medium, and low confidence targets”; col. 11, ln. 1-9: “A target confidence map would provide hints as to where to look. The preferred embodiment is to create this overlay from two colors, for instance red (for high target confidence area 44) and blue (for low target confidence area 42) in FIG. 9, one representing increased target confidence (high target confidence area 44), for instance above 50%, and the other representing reduced confidence (low target confidence area 42). The intensity of each color would be proportional to the strength of target confidence”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Levinson et al. in view of NPL-1 by adding the visual highlighting as taught by Ebersole et al., with a reasonable expectation of success. The motivation to modify the method of Levinson et al. in view of NPL-1 further in view of Ebersole et al. is to provide a visual aid for viewers.
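Office Note (illustrative only): In implementation terms, the superimposed visual highlighting taught by Ebersole et al. corresponds to alpha-blending a colored layer over the flagged image region. The color and blending weight in the sketch below are arbitrary illustrative choices.

```python
# Sketch (illustrative assumptions): superimpose visual highlighting on
# the image by alpha-blending a red layer over the pixels flagged by the
# error prediction. The color and blending weight are arbitrary choices.
import numpy as np

def highlight_errors(image_rgb: np.ndarray, error_mask: np.ndarray,
                     alpha: float = 0.5) -> np.ndarray:
    """image_rgb: [H, W, 3] uint8; error_mask: [H, W] bool."""
    out = image_rgb.astype(np.float32).copy()
    red = np.array([255.0, 0.0, 0.0])
    out[error_mask] = (1.0 - alpha) * out[error_mask] + alpha * red
    return out.astype(np.uint8)
```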
Regarding claim 12, Levinson et al. in view of NPL-1 further in view of Ebersole et al. teaches the method according to claim 11.
NPL-1 further teaches wherein: the errors of the segmentation model are predicted pixel by pixel (NPL-1 at Abstract: “the proposed model learns to predict pixel-wise failure probabilities”), a number of the predicted errors and/or an average error is determined based on the errors of the segmentation model for the respective image (NPL-1 at pg. 4, left column: “We use the variance of the resulting five predictions per pixel as the predicted failure score”), and it is checked as the predetermined criterion whether the number of the errors and/or the average error (NPL-1 at pg. 4, left column: “Since less than 10% of the predictions of the baseline model are errors, the error data set is highly imbalanced”; at description of Fig. 4 on pg. 6: “(c), (d) and (e) show the predicted failure probability per pixel for MC dropout, deep ensemble and introspection, darker being a higher probability”).
Levinson et al. further discloses it is checked as the predetermined criterion whether the number (Levinson et al. at para. [0072]: “In the event that trajectory evaluator 465 has insufficient information to ensure a confidence level high enough to provide collision-free, optimized travel, planner 464 can generate a request to teleoperator 404 for teleoperator support”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Levinson et al. in view of NPL-1 further in view of Ebersole et al. by replacing the confidence determination using the predetermined confidence threshold value of Levinson et al. with the pixel-by-pixel prediction of errors of the segmentation model as taught by NPL-1, with a reasonable expectation of success. The confidence level disclosed by Levinson et al. is directly associated with the error probability of NPL-1. The motivation to modify the method of Levinson et al. in view of NPL-1 further in view of Ebersole et al. is to provide improved accuracy in the prediction of errors of a semantic segmentation model (see NPL-1 at Abstract).
Office Note: The Office interprets the term “an average error” as “an average of any quantifiable parameters of errors.”
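Office Note (illustrative only): Under the interpretation above, the criterion of claim 12 may be sketched as follows: per-pixel failure scores are taken as the variance over several stochastic forward passes (cf. NPL-1's variance of five predictions per pixel), and the number of error pixels and/or the average error is compared against a threshold. All threshold values below are arbitrary illustrative assumptions.

```python
# Sketch (illustrative assumptions): per-pixel failure scores as the
# variance over T stochastic forward passes, then a criterion on the
# number of error pixels and/or the average error.
import numpy as np

def failure_scores(prob_stack: np.ndarray) -> np.ndarray:
    """prob_stack: [T, H, W] probabilities from T stochastic passes."""
    return prob_stack.var(axis=0)               # [H, W] per-pixel variance

def criterion_met(scores: np.ndarray,
                  pixel_thresh: float = 0.05,
                  max_error_pixels: int = 5000,
                  max_average_error: float = 0.01) -> bool:
    num_error_pixels = int((scores > pixel_thresh).sum())
    average_error = float(scores.mean())
    return (num_error_pixels > max_error_pixels
            or average_error > max_average_error)
```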
Regarding claim 17, Levinson et al. in view of NPL-1 further in view of Ebersole et al. teaches the method according to claim 11.
NPL-1 further teaches wherein: the visualization is generated in a form of a heat map (NPL-1 at Fig. 4 and description of Fig. 4 on pg. 6: “(c), (d) and (e) show the predicted failure probability per pixel for MC dropout, deep ensemble and introspection, darker being a higher probability”; Fig. 4(e) shows a monochrome heat map).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Levinson et al. in view of NPL-1 further in view of Ebersole et al. by adding the heat map as taught by NPL-1, with a reasonable expectation of success. The motivation to modify the method of Levinson et al. in view of NPL-1 further in view of Ebersole et al. is to provide improved visualization of errors of a semantic segmentation model (see NPL-1 at Abstract).
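Office Note (illustrative only): A heat map of the kind shown in NPL-1 at Fig. 4 may be rendered by mapping the per-pixel failure probabilities through a colormap; the "inferno" colormap below is an arbitrary illustrative choice.

```python
# Sketch (illustrative assumptions): render per-pixel failure
# probabilities as a heat-map image via a matplotlib colormap.
import numpy as np
import matplotlib

def heat_map(scores: np.ndarray) -> np.ndarray:
    """scores: [H, W] in [0, 1] -> [H, W, 3] uint8 heat-map image."""
    rgba = matplotlib.colormaps["inferno"](np.clip(scores, 0.0, 1.0))
    return (rgba[..., :3] * 255).astype(np.uint8)
```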
Regarding claim 18, Levinson et al. in view of NPL-1 further in view of Ebersole et al. teaches the method according to claim 11.
Levinson et al. further discloses further comprising: determining which functionality is affected by the errors, and sending the functionality with the request to the vehicle-external operator (Levinson et al. at para. [0145]: “teleoperator manager 3607 may be invoked responsive to message data 3619, which may include a request for one type of many teleoperation services” and “Message data 3619 may also include a request for teleoperator services if, for instance, autonomous vehicle controller 3647 experiences difficulties identifying or classifying one or more objects or obstacles that may affect planning and/or trajectory generation. Message data 3619 may also include a request to monitor the operation of autonomous vehicle 3630 or any component therein (e.g., sensor performance, drivetrain characteristics, battery charge levels, etc.)”).
Regarding claim 19, Levinson et al. in view of NPL-1 further in view of Ebersole et al. teaches the method according to claim 11.
Levinson et al. further discloses an assistance unit for the motor vehicle (Levinson et al. at para. [0061]: “Diagram 300 depicts an interior view of a bidirectional autonomous vehicle 330 that includes sensors”), the assistance unit comprising: an input interface for acquiring the images (Levinson et al. at para. [0061]: “Sensors shown in diagram 300 include image capture sensors 340 (e.g., light capture devices or cameras of any type), audio capture sensors 342 (e.g., microphones of any type), radar devices 348, sonar devices 341 (or other like sensors, including ultrasonic sensors or acoustic-related sensors), and Lidar devices 346, among other sensor types and modalities”), a data storage unit (Levinson et al. at claim 19: “memory having stored thereon processor-executable instructions that, when executed by the one or more processors, configure the autonomous vehicle to perform operations”), a processor unit (Levinson et al. at claim 19: “An autonomous vehicle comprising: one or more processors”), and an output interface for outputting the request for the takeover of control by the vehicle-external operator and the visualization (Levinson et al. at para. [0171]: “The visualization data may be presented to a display or user interface so that a teleoperator is presented with the visualization data”), wherein the assistance unit is configured to carry out the method according to claim 11 (Levinson et al. at para. [0058]: “autonomous vehicle controller 147 is configured to invoke teleoperation services to reduce the likelihood that an autonomous vehicle 109 is delayed in transit while resolving an event or issue that may otherwise affect the safety of the occupants”).
Regarding claim 20, Levinson et al. in view of NPL-1 further in view of Ebersole et al. teaches the assistance unit according to claim 19.
Levinson et al. further discloses a motor vehicle comprising: a camera for recording the images (Levinson et al. at para. [0061]: “Sensors shown in diagram 300 include image capture sensors 340 (e.g., light capture devices or cameras of any type”), the assistance unit according to claim 19, wherein the assistance unit is connected to the camera (Levinson et al. at para. [0061]: “Diagram 300 depicts an interior view of a bidirectional autonomous vehicle 330 that includes sensors”), and a communication unit for wirelessly sending the request for the takeover of control and the visualization and for wirelessly receiving control signals for control of the motor vehicle (Levinson et al. at para. [0058]: “autonomous vehicle controller 147 is configured to invoke teleoperation services to reduce the likelihood that an autonomous vehicle 109 is delayed in transit while resolving an event or issue that may otherwise affect the safety of the occupants”; para. [0171]: “The visualization data may be presented to a display or user interface so that a teleoperator is presented with the visualization data”; para. [0122]: “autonomous vehicle 3230 may establish a wireless communication link 3262 (e.g., via a radio frequency ("RF") signal, such as WiFi or Bluetooth®, including BLE, or the like) for communicating”).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Levinson et al. in view of NPL-1 further in view of Ebersole et al. and Fuelster et al. (US 2023/0386167 A1).
Regarding claim 13, Levinson et al. in view of NPL-1 further in view of Ebersole et al. teaches the method according to claim 11.
NPL-1 further teaches wherein: the errors of the segmentation model are predicted pixel by pixel (NPL-1 at Abstract: “the proposed model learns to predict pixel-wise failure probabilities”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Levinson et al. in view of NPL-1 by adding the pixel-by-pixel prediction as taught by NPL-1, with a reasonable expectation of success. The motivation to modify the method of Levinson et al. in view of NPL-1 is to provide improved accuracy in the prediction of errors of a semantic segmentation model (see NPL-1 at Abstract).
However, Levinson et al. in view of NPL-1 further in view of Ebersole et al. does not explicitly state a size of a coherent area of error pixels is determined, and whether the size corresponds at least to a predetermined size threshold value is checked as the predetermined criterion.
Nevertheless, Levinson et al. at least suggests the idea of checking the predetermined criterion as disclosed in para. [0072], “In the event that trajectory evaluator 465 has insufficient information to ensure a confidence level high enough to provide collision-free, optimized travel, planner 464 can generate a request to teleoperator 404 for teleoperator support”.
In the same field of endeavor, Fuelster et al. teaches a size of a coherent area of error pixels is determined, and whether the size corresponds at least to a predetermined size threshold value is checked as the predetermined criterion (Fuelster et al. at para. [0103]: “the amount of uncertainty-and thus the uncertainty scores-that can be determined from the activation levels (i.e. scores) of the elements of the score maps also is the amount of uncertainty of the image pixel(s) that correspond to the respective elements of the score maps”; para. [0105]: “The uncertainty determined from the activation levels can be mapped on the input image pixel matrix according to the size of the inputs or the size of the segment map”; para. [0108]: “The size of a segment representing an unknown object can be determined from the size of a cluster of pixels with a high uncertainty score or by the length of the outline of a field of contiguous pixels having a similar uncertainty score”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Levinson et al. in view of NPL-1 further in view of Ebersole et al. by replacing the confidence determination using the predetermined confidence threshold value of Levinson et al. with the determination of the size of the coherent area of error pixels as taught by Fuelster et al., with a reasonable expectation of success. The confidence level disclosed by Levinson et al. is directly associated with the size of the coherent area of error pixels of Fuelster et al. The motivation to modify the method of Levinson et al. in view of NPL-1 further in view of Ebersole et al. and Fuelster et al. is to provide a system for detecting and managing uncertainties in perception by using pixelwise analysis of a semantic segmentation model (see Fuelster et al. at Abstract).
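Office Note (illustrative only): The criterion of claim 13 amounts to connected-component analysis of the predicted error mask, as sketched below; the size threshold is an arbitrary illustrative value.

```python
# Sketch (illustrative assumptions): check, as the predetermined
# criterion, whether any coherent (connected) area of predicted error
# pixels reaches a predetermined size threshold.
import numpy as np
from scipy import ndimage

def large_error_region_exists(error_mask: np.ndarray,
                              size_threshold: int = 500) -> bool:
    """error_mask: [H, W] bool map of predicted error pixels."""
    labeled, num_regions = ndimage.label(error_mask)
    if num_regions == 0:
        return False
    sizes = np.bincount(labeled.ravel())[1:]    # pixel count per region
    return bool(sizes.max() >= size_threshold)
```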
Claims 14 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Levinson et al. in view of NPL-1 further in view of Ebersole et al. and NPL-3 (Xia, Yingda, et al. “Synthesize then compare: Detecting failures and anomalies for semantic segmentation.” Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part I 16. Springer International Publishing, 2020).
Regarding claim 14, Levinson et al. in view of NPL-1 further in view of Ebersole et al. teaches the method according to claim 11.
However, Levinson et al. in view of NPL-1 further in view of Ebersole et al. does not explicitly state wherein: by way of a predetermined reconstruction model, from a semantic segmentation, the image underlying the semantic segmentation is approximated by generating a corresponding reconstruction image and the respective visualization is generated based on the reconstruction image.
In the same field of endeavor, NPL-3 teaches wherein: by way of a predetermined reconstruction model, from a semantic segmentation, the image underlying the semantic segmentation is approximated by generating a corresponding reconstruction image and the respective visualization is generated based on the reconstruction image (NPL-3 at pg. 146: “This framework consists of two components: an image synthesis module, which synthesizes an image from a segmentation result to reconstruct its input image, i.e., a reverse procedure of semantic segmentation, and a comparison module which computes the difference between the reconstructed image and the input image” and “Presumably the converse is also true, the better is the segmentation result, the closer a synthesized image generated from the segmentation result is to the input image”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Levinson et al. in view of NPL-1 further in view of Ebersole et al. by adding the generation of the corresponding reconstruction image as taught by NPL-3, with a reasonable expectation of success. The motivation to modify the method of Levinson et al. in view of NPL-1 further in view of Ebersole et al. and NPL-3 is to provide a reliable system for detecting failures and anomalies in safety-critical applications of semantic segmentation (see NPL-3 at Abstract).
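Office Note (illustrative only): The "synthesize then compare" scheme of NPL-3 reconstructs the input image from the segmentation result, i.e., a reverse procedure of semantic segmentation. Treating the trained reconstruction model as a black box, the reconstruction step may be sketched as follows; the one-hot encoding and tensor shapes are assumptions for illustration.

```python
# Sketch (illustrative assumptions): NPL-3's reverse procedure, with the
# trained reconstruction model treated as a black box that maps a one-hot
# segmentation back to an image approximating the original input.
import torch
import torch.nn.functional as F

def reconstruct(reconstructor: torch.nn.Module,
                label_map: torch.Tensor, num_classes: int) -> torch.Tensor:
    """label_map: [H, W] long tensor -> reconstruction image [3, H, W]."""
    one_hot = F.one_hot(label_map, num_classes)              # [H, W, C]
    one_hot = one_hot.permute(2, 0, 1).float().unsqueeze(0)  # [1, C, H, W]
    with torch.no_grad():
        return reconstructor(one_hot).squeeze(0)             # [3, H, W]
```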
Regarding claim 16, Levinson et al. in view of NPL-1 further in view of Ebersole et al. and NPL-3 teaches the method according to claim 14.
NPL-3 further teaches wherein: to predict the errors, the reconstruction image is compared to the respective underlying acquired image and the errors are predicted based on detected differences (NPL-3 at pg. 147: “Consequently, the anomalous object can be identified by finding the differences between the test image and the synthesized image”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Levinson et al. in view of NPL-1 further in view of Ebersole et al. and NPL-3 by adding the comparison based on detected differences as taught by NPL-3, with a reasonable expectation of success. The motivation to modify the method of Levinson et al. in view of NPL-1 further in view of Ebersole et al. and NPL-3 is to provide a reliable system for detecting failures and anomalies in safety-critical applications of semantic segmentation (see NPL-3 at Abstract).
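Office Note (illustrative only): The comparison step of claim 16 may be sketched as a per-pixel difference between the acquired image and its reconstruction. NPL-3 itself uses a learned comparison module, so the plain absolute difference and the threshold below are simplifications assumed for illustration.

```python
# Sketch (illustrative assumptions): predict errors from detected
# differences between the acquired image and its reconstruction.
import torch

def difference_error_map(image: torch.Tensor, recon: torch.Tensor,
                         thresh: float = 0.3) -> torch.Tensor:
    """image, recon: [3, H, W] in [0, 1] -> [H, W] bool error map."""
    per_pixel = (image - recon).abs().mean(dim=0)  # mean over channels
    return per_pixel > thresh
```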
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Levinson et al. in view of NPL-1 further in view of Ebersole et al., NPL-3, and NPL-2 (Haldimann, David, et al. “This is not what I imagined: Error Detection for Semantic Segmentation through Visual Dissimilarity.” arXiv preprint arXiv:1909.00676 (2019)).
Regarding claim 15, Levinson et al. in view of NPL-1 further in view of Ebersole et al. and NPL-3 teaches the method according to claim 14.
However, Levinson et al. in view of NPL-1 further in view of Ebersole et al. and NPL-3 does not explicitly state wherein: the reconstruction model comprises generative adversarial networks.
In the same field of endeavor, NPL-2 teaches wherein: the reconstruction model comprises generative adversarial networks (NPL-2 at pg. 1, right column: “we utilize generative models such as the conditional generative adversarial network (cGAN) [11] to generate synthetic images based on the output of a semantic segmentation network”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Levinson et al. in view of NPL-1 further in view of Ebersole et al. and NPL-3 by adding the generative adversarial networks as taught by NPL-2, with a reasonable expectation of success. The motivation to modify the method of Levinson et al. in view of NPL-1 further in view of Ebersole et al., NPL-3, and NPL-2 is to provide improved applicability of semantic segmentation to autonomous or safety-critical systems (see NPL-2 at Abstract).
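Office Note (illustrative only): A conditional generator of the kind referenced by NPL-2 takes a one-hot segmentation map as input and synthesizes an RGB image. The toy network below is an assumption for illustration; a practical reconstruction model would be a pix2pix-scale conditional GAN, and its adversarial training against a discriminator is omitted here.

```python
# Toy sketch (illustrative assumptions): a cGAN-style generator
# conditioned on a one-hot segmentation map; discriminator and
# adversarial training loop are omitted.
import torch
import torch.nn as nn

class TinyCondGenerator(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 1), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, one_hot_seg: torch.Tensor) -> torch.Tensor:
        """one_hot_seg: [N, C, H, W] -> synthesized image [N, 3, H, W]."""
        return self.net(one_hot_seg)
```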
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and can be found in the attached PTO-892 form.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JISUN CHOI whose telephone number is (571)270-0710. The examiner can normally be reached Mon-Fri, 9:00 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Scott Browne can be reached on (571)270-0151. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JISUN CHOI/Examiner, Art Unit 3666
/SCOTT A BROWNE/Supervisory Patent Examiner, Art Unit 3666