Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 11 is objected to because of the following informalities:
Claim 11 references claim 1, but does not properly depend from claim 1 because the instructions can exist without performance of any of the method steps. Here, claim 1 is a method but claim 11 is an apparatus, and the apparatus claim can be met without necessarily practicing the method. MPEP 608.01(n)(III) addresses the “test for proper dependency.”
MPEP 608.01(n) states:
Any claim which is in dependent form but which is so worded that it, in fact, is not a proper dependent claim, as for example it does not include every limitation of the claim on which it depends, will be required to be canceled as not being a proper dependent claim; and cancellation of any further claim depending on such a dependent claim will be similarly required. The applicant may thereupon amend the claims to place them in proper dependent form, or may redraft them as independent claims, upon payment of any necessary additional fee.
Therefore, cancellation of claim 11 is required.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-14 (all claims) are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
All claims are rejected because none of the claims are directed to an invention that provides the benefits articulated at specification [0006]. In particular, the specification teaches few-shot learning, see, e.g., [0061], but even if few-shot learning is successful, it only provides a general classification, such as determining whether an object is a traffic sign or an animal. Specification, [0040]. This is insufficient for the “comprehensive understanding” that the specification deems “imperative.” Specification, [0005]. Because this specification does not disclose technology that provides the asserted benefits to automated driving (e.g., specification, [0002]), it lacks written description support.
Claims 1 and 12 recite “obtain[ing] a finite sub-set of annotated images being representative of at least the target object class,” but this is unlimited functional claiming because the claim has not specified how the system knows which object class is the target class (rather, the claim suggests that these annotated images are used to determine the target class). Because the claim has not specified how the target class is known, the claim covers more than the specification has disclosed. MPEP 2173.05(g). Further, even if the target class were known, the claim has not specified how the annotated images are known to be representative (as opposed to, for example, specifying that the relationship is previously stored). The disclosure at specification [0065] is insufficient to overcome this rejection because it lacks sufficient detail.
Claim 5 recites “obtaining a set of selected one or more labelled images … from a plurality of vehicles travelling on the road,” but this is unlimited functional claiming because it is not limited to other vehicles that also perform the claimed method in the same network as the ego vehicle.
Claim 6 recites “forming a training data set … based on … .” This is unlimited functional claiming because of the wide range of possible data sets and methods for forming. MPEP 2173.05(g). Specifying which data is used is expected to overcome this rejection.
Claim 7 recites “updating,” but this is unlimited functional claiming because there are insufficient limitations on what the update is or how it is performed.
Claim 9 recites “wherein the vehicle comprises an Automated Driving System (ADS).” Specification, [0005] specifically identifies SAE level 5 as being included, but SAE level 5 was beyond the level of ordinary skill in the art as of the priority date. This is unlimited functional claiming. MPEP 2173.05(g). Applicant may wish to submit evidence to demonstrate the claimed level of autonomous driving.
Dependent claims are likewise rejected. Claim 11 is rejected as per claim 1.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-14 (all claims) are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 1 and 12 recite a “sub-set of annotated images being representative of at least the target object class.” This is indefinite because a “sub-set” is defined with respect to the entire set, but the claims do not define what that set is, and thus it is unclear whether or not a given group of images is a “sub-set” or not.
Claims 1 and 12 recite “representative,” but this is subjective because different people can have different opinions as to what is representative. MPEP 2173.05(b)(IV).
Claims 1 and 12 recite “threshold value number,” but this is also subjective because the threshold is unspecified. MPEP 2173.05(b)(IV). Providing an objective definition of the threshold is expected to overcome this rejection. Additionally, the word “value” appears to be superfluous.
Claims 1 and 12 recite “preliminary association,” but it is unclear how to interpret “preliminary” because the claims do not require finalizing the association (e.g., as per below, the claims recite “for generating,” which may be intended use). In other words, it is not clear how to assess whether a given association is “preliminary” or not.
Claims 1 and 12 recite “when the preliminary association is determined: … .” Here, the word “when” is understood as requiring the recited steps to happen at the same time as the determination. This simultaneity requirement (i.e., that one step happens “when” the other does) is a relative term, and the specification provides insufficient guidance. MPEP 2173.05(b).
Claims 1 and 12 recite “for the one or more images,” but this is subjective because different people can have different opinions about how something should or will be used, i.e., whether a given image label is for one image, certain images, all of the images or none of the images. MPEP 2173.05(b)(IV).
Claims 1 and 12 recite “in order to,” but it is not clear if this is a required step or merely intended use. If the claim language is an intended use, it is unclear what is required of the label to be capable of performing the intended use. Relatedly, claims 1 and 12 later recite “the one or more labelled images,” which requires this step to occur to provide antecedent basis. MPEP 2173.05(e).
Claims 1 and 12 recite “indicating,” but this is subjective. MPEP 2173.05(b)(IV).
Claims 1 and 12 recite “for generating,” but it is not clear if this is a required step or merely intended use. If the claim language is an intended use, it is unclear what is required of the selecting to be capable of performing the intended use. Note that claim 6 recites “the generated.”
Claims 1 and 12 recite “corresponding,” but this is subjective because different people can have different opinions regarding correspondence. MPEP 2173.05(b)(IV). Removing the word “corresponding” is expected to overcome this rejection.
Claims 1 and 12 recite “identification annotations.” The term “identify” is used with different meanings in the specification. For example, [0041] states “detected and identified by vehicle sensors (radar, LIDAR, cameras, etc.),” but it is not clear what a radar or lidar could do beyond detecting the object, suggesting that “identifying” means something along the lines of determining a shape or presence. [0046] and [0047] use “identify” to mean determining which class something belongs to. [0064] states that the annotations may be done manually, which suggests that the annotations are specific identifications. [0100] provides examples of identifications, but these examples do not distinguish between classifying and specifically identifying.
It is also unclear how the word “annotation” changes the scope of “identification.” It appears that the intent is that the identification is annotated, but the broadest reasonable interpretation in light of the specification includes an annotation that is useful for identifying. See, e.g., [0048] “a few annotated images of a traffic object which are considered to be a representative of at least the target object class,” suggesting that an annotation is not a specific identification.
Claims 1 and 12 recite both “label” and “annotate.” These terms are often synonymous in image recognition, such that the difference between these terms is unclear.
Claim 2 recites “road-side traffic object comprising a traffic sign or a traffic signal.” MPEP 2173.05(h)(I) states: “If a Markush grouping requires a material selected from an open list of alternatives (e.g., selected from the group “comprising” or “consisting essentially of” the recited alternatives), the claim should generally be rejected under 35 U.S.C. 112(b) as indefinite because it is unclear what other alternatives are intended to be encompassed by the claim.” Here, it is unclear what a road-side traffic object could be other than a traffic sign or signal. One way to overcome this rejection is to replace the word “comprising” with “consisting of.”
Claims 3 and 13 recite “for a subsequent transmission to the remote server for generating … .” Here, each instance of “for” is indefinite because it is not clear whether this is a required step or an intended use, and if it is an intended use, what is required to meet the claim limitation. Note that claim 6 recites “the generated.”
Claim 5 recites “produced and selected by each vehicle of the plurality of vehicles.” This lacks sufficient antecedent basis because the other vehicles are not recited as having performed the method of claim 1. MPEP 2173.05(e).
Claim 6 recites “for forming,” but it is not clear whether this is a required step or if it is intended use, and if it is intended use, what would be required.
Claim 7 recites “being updated with,” but it is unclear if this requires that the updating be presently occurring or, instead, that it happened already.
Claim 14 recites “localization system,” but this is new terminology. MPEP 2173.05(a). One way to overcome this rejection is to recite specific technology, such as GPS. Specification, [0095].
Dependent claims are likewise rejected. Claim 11 is rejected as per claim 1.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-14 (all claims) are rejected under 35 U.S.C. 101 because the claimed invention lacks patentable utility. MPEP 2107.01(I) explains:
Practical utility is a shorthand way of attributing “real-world” value to claimed subject matter. In other words, one skilled in the art can use a claimed discovery in a manner which provides some immediate benefit to the public.
Nelson v. Bowler, 626 F.2d 853, 856, 206 USPQ 881, 883 (CCPA 1980).
However, the utility asserted in the present application does not provide an immediate benefit to the public, but is instead an avenue for further research. MPEP 2107.01(I)(B) discusses In re Fisher, 421 F.3d 1365 (Fed. Cir. 2005).
The claims at issue in Fisher were directed to expressed sequence tags (ESTs), which are short nucleotide sequences that can be used to discover what genes and downstream proteins are expressed in a cell. The court held that “the claimed ESTs can be used only to gain further information about the underlying genes and the proteins encoded for by those genes. The claimed ESTs themselves are not an end of [the inventor’s] research effort, but only tools to be used along the way in the search for a practical utility…. [Applicant] does not identify the function for the underlying protein-encoding genes. Absent such identification, we hold that the claimed ESTs have not been researched and understood to the point of providing an immediate, well-defined, real world benefit to the public meriting the grant of a patent.” Id. at 1376, 76 USPQ2d at 1233-34. Thus a “substantial utility” defines a “real world” use.
The portions of the present specification that are directed to utility are:
[0002]The disclosed technology relates to methods and systems for determining an association of at least one object to a target object class, wherein the at least one object is present in a surrounding environment of a vehicle travelling on a road. In particular, but not exclusively the disclosed technology relates to recognition and classification of objects being present in the surrounding environment of the vehicle.
[0006]There is thus a pressing need in the art for novel and improved solutions for classification of various objects on roads with high accuracy and speed and without the need for extensive data collection.
[0015]The present inventors have accordingly realized that by using a data-driven approach according to the presented method and systems herein scalability, speed and reproducibility can be achieved in classification of objects such as roadside objects including traffic objects without the stringent requirements on massive data collection. The data-driven approach of the present disclosure provides a flexible, cost-efficient, and rapid approach for generating training data for training neural networks and ML algorithms, specifically for objects for which many samples of real-world data are neither collected nor available. This also greatly contributes to solving the problem of identification of rare objects in scenarios involving multiple environmental variables or conditions happening simultaneously or outside the conventional levels.
[0058]The present inventors have realized that by using a data-driven approach comprising the use of FSL models scalability, speed and reproducibility can be achieved in classification of objects such as roadside traffic objects without the stringent requirements on massive data collection. The data-driven approach of the present disclosure provides a flexible, cost-efficient, and rapid approach for generating training data for training neural networks and ML algorithms, specifically for objects for which many samples of real-world data are neither collected nor available. This also greatly contributes to solving the problem of identification of rare objects in scenarios involving multiple environmental variables or conditions happening simultaneously or outside the conventional levels.
The examiner finds that classification of an object without a precise identification (e.g., knowing that an object is a traffic sign, but not which traffic sign) is not a real world benefit because an autonomous vehicle does not have enough information to act on. The examiner also finds that the alleged advances disclosed in this application are not an immediate benefit, but rather an opportunity for further study, akin to the sequence tags from Fisher. In particular, the specification merely states that this technology generates training data, but does not show what the training data is used for. While the specification identifies a lack of rare objects, the disclosed technology does not address identifying rare objects. Specification:
[0057]The training data is usually obtained through driving the ego vehicle 1, or a plurality of vehicles comprised in a fleet of vehicles, or dedicated test vehicles on various types of roads under a variety of environmental conditions and for suitable periods of time to collect and evaluate large data sets of detected objects on the road. However, the very large variety of objects being present on the road and in the surrounding environment of the vehicle may render the task of data collection and formation of corresponding training data sets practically unattainable. This is even more relevant for scenarios when the object is a rare object such a traffic object e.g. a traffic sign or signal, which has newly been introduced into traffic and for which no adequate amount of data is available yet. Another example may be rare species of animals, which are not completely classified and may drastically vary based on their geographical habitats.
Here, the present technology is not directed to determining the rarity of the detected object; rather, it simply identifies the object as a traffic sign or an animal. This is not a benefit for identifying rare objects because no guidance is produced as to whether a rare object is present and, if so, where it would be categorized.
Here, the present technology allegedly produces classifications of objects on the road, but the application does not identify why one would want training data that has classified objects as, for example, traffic signs versus animals. Rather, the application has alleged that there is a desire for training data of rare objects (and the examiner finds this persuasive), but fails to teach how rare objects would be distinguished from common objects.
Additionally, even if the above shortcomings were addressed, the Majee reference submitted by Applicant (titled “Few-Shot Learning for Road Object Detection”) closes with “We also observe that class-confusions remains [sic] an open challenge in any few-shot learning paradigm and can be the focus of further improvements.” Comparing the technology discussed in Majee and the present specification does not identify any technology taught in the present specification that overcomes the shortcomings identified in Majee. In other words, even if there were a real world benefit to classifying images of objects on or near roads with few-shot learning, this technology does not resolve the known problem of class confusion.
Therefore, the alleged utility of this invention is not sufficiently specific and substantial.
MPEP 2107.01(I) states:
Practical considerations require the Office to rely on the inventor’s understanding of the invention in determining whether and in what regard an invention is believed to be “useful.” Because of this, Office personnel should focus on and be receptive to assertions made by the applicant that an invention is “useful” for a particular reason.
One way to overcome this rejection may be to submit evidence (such as a declaration) showing the real world benefit of the disclosed technology.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-7 and 9-14 (all claims except for 8) are rejected under 35 U.S.C. 102(a)(1) and/or (a)(2) as being anticipated by US20190318207A1 (“Lo”).
1. A method for determining an association of at least one object to a target object class, the at least one object being present in a surrounding environment of a vehicle travelling on a road, (Lo, [0016] “an image classification system of an autonomous or semi-autonomous vehicle”)
the method comprising:
obtaining sensor data from a sensor system of the vehicle, the sensor data comprising one or more images, captured by a vehicle-mounted camera, of the surrounding environment of the vehicle; (Lo, [0046] “For example, the input sensor data 155 can include images captured by a camera of an environment that surrounds a vehicle.” See also Fig. 3C. Fig. 3C shows a stop sign.)
determining a presence of the at least one object in the surrounding environment of the vehicle based on the obtained sensor data; (Lo, [0071] “For instance, the common instance classifier 310 processes the input image 302 to generate a common instance output 304A.”)
obtaining a finite sub-set of annotated images being representative of at least the target object class, the finite sub-set comprising a number of annotated images smaller than a threshold value number; (Lo, [0062] “Each of the training examples 123A and 123B includes images of reference objects as well as one or more labels for each image.” Lo’s labels teach the claimed annotations, and the claimed threshold is arbitrarily chosen to be larger than the number of Lo’s images.)
determining, based on the obtained sensor data comprising one or more images of the at least one object and the obtained finite sub-set of annotated images, a preliminary association between the at least one object and the target object class; and (Lo, Fig. 3A, object score 305)
when the preliminary association is determined:
producing an image label for the one or more images of the at least one object in order to obtain one or more labelled images indicating the preliminary association of the at least one object with the target object class; and (Lo, Fig. 3A, 305 “object category”)
selecting the one or more labelled images for generating a corresponding object identification annotation for the one or more labelled images. (Lo, Fig. 3C. One difference between Figs. 3A and 3C is that 3C has a correct classification.)
2. The method according to claim 1, wherein the at least one object comprises at least one road-side traffic object comprising a traffic sign or a traffic signal. (Lo, Fig. 3C, 302C, stop sign)
3. The method according to claim 1, wherein the method further comprises:
transmitting the selected one or more labelled images of the at least one object to a remote server and/or (Lo, [0104] “The features can be implemented in a computer system that includes a back-end component, such as a data server … .”)
storing the selected one or more labelled images in a memory of the vehicle for a subsequent transmission to the remote server for generating the corresponding object identification annotation based at least on the transmitted one or more labelled images of the at least one object from the vehicle. (Lo, [0102] “Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory … .” The claimed “for a subsequent transmission” and “for generating” are both interpreted as intended use. Here, Lo’s system is capable of being used for these intended uses.)
4. The method according to claim 1, wherein the method further comprises: storing the selected one or more labelled images in a memory of the vehicle; and (Lo, Fig. 1, On-Board Classifier Subsystem 134, see also [0049] “… or, in the case of an executing software module, stored within the same memory device.” Lo’s use of “on-board” teaches the claimed memory of the vehicle. Additionally, [0049] is describing the on-board classifier subsystem 134.)
generating the corresponding object identification annotation for the one or more labelled images of the at least one object. (Lo, Fig. 2 and [0069] “classifying the input image in accordance with the determined weight (250)”. Lo’s classifying teaches the claimed annotation.)
5. The method according to claim 4, wherein the method further comprises:
obtaining a set of selected one or more labelled images of the at least one object from a remote server and/or from a plurality of vehicles travelling on the road, (Lo, [0062] “Each of the training examples 123A and 123B includes images of reference objects as well as one or more labels for each image.” Lo, Fig. 1 shows the training examples 123A and 123B as housed in data center 112.)
wherein the set of selected one or more labelled images of the at least one object comprises the one or more labelled images of the at least one object produced and selected by each vehicle of the plurality of vehicles; (Lo, [0054] “The on-board system 130 can provide the training data 123 to the training system 110 in offline batches or in an online fashion, e.g., continually whenever it is generated.” Lo’s training data teaches the claimed labelled images.)
generating the corresponding object identification annotation based on the obtained set of the selected one or more labelled images of the at least one object. (Lo, Fig. 1)
6. The method according to claim 4, wherein the method further comprises:
forming a training data set for a machine learning, ML, algorithm configured for identification of the at least one object based on the generated object identification annotation; or (Lo, Fig. 1, training system 110)
transmitting the generated object identification annotation to a remote server for forming the training data set for the machine learning, ML, algorithm. (Lo, Fig. 1, training data 123)
7. The method according to claim 3, wherein the method further comprises:
obtaining, from the remote server, an updated finite sub-set of annotated images being updated with the generated object identification annotation for the one or more labelled images of the at least one object; or (Lo, Fig. 1, training classifier subsystem 114)
updating the finite sub-set of annotated images with the generated object identification annotation for the one or more labelled images of the at least one object. (Lo, Fig. 1, training classifier subsystem 114)
9. The method according to claim 1, wherein the vehicle comprises an Automated Driving System (ADS). (Lo, abstract, “an image classification system of an autonomous or semi-autonomous vehicle”)
10. The method according to claim 1, wherein the method is performed by a processing circuitry of the vehicle. (Lo, Fig. 1, vehicle 122 and [0101] “The features described can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them.”)
Claim 11 is rejected as per claim 1. See also, Lo, claim 15 “One or more non-transitory computer-readable storage media encoded with computer program instructions … .”
Claims 12 and 13 are rejected as per the corresponding method claims. See also, Lo, claim 8, “A system comprising … .”
14. A vehicle comprising: one or more vehicle-mounted sensors configured to monitor a surrounding environment of the vehicle; (Lo, Fig. 1, sensor subsystems 132)
a localization system configured to monitor a geographical position of the vehicle; and (Lo, [0043] “The sensor subsystems include a combination of components that receive reflections of electromagnetic radiation, e.g., LIDAR systems that detect reflections of laser light.” Lo’s lidar teaches the claimed monitoring of the position of the vehicle; see, e.g., [0042] “For example, the vehicle 122 can autonomously apply the brakes if a full-object prediction indicates that a human driver is about to collide with a detected object,” because avoiding a collision teaches the claimed monitoring of position.)
a system according to claim 12. (See the mapping of claim 12.)
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over US20190318207A1 (“Lo”) and US20220189143A1 (“Xie”).
8. Lo teaches the method according to claim 1, but is not relied on for the below claim language.
However, Xie teaches wherein the method further comprises determining the preliminary association between the at least one object and the target object class by means of a few shot learning model. (Xie, [0004] “Few-shot learning aims at learning to recognize visual categories using only a few labelled exemplars from each category.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Xie to the teachings of Lo such that Lo can also classify with few shot learning for the purpose of the advantages from Xie’s background (i.e., Xie, [0003]-[0006]).
Based on the above, this is an example of “combining prior art elements according to known methods to yield predictable results.” MPEP 2143.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US11953340B2 – titled “Updating road navigation model using non-semantic road feature points”
US12354366B2 – claim 1 “analyze one or more pixels of the at least one image to determine whether the one or more pixels represent at least a portion of a target vehicle, and for pixels determined to represent at least a portion of the target vehicle, determine one or more estimated distance values from the one or more pixels to at least one edge of a face of the target vehicle”
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID ORANGE whose telephone number is (571)270-1799. The examiner can normally be reached Mon-Fri, 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse can be reached at 571-272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID ORANGE/ Primary Examiner, Art Unit 2663