Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Applicant should show support in the original disclosure for the new or amended claims. See, e.g., Hyatt v. Dudas, 492 F.3d 1365, 1370, n.4, 83 USPQ2d 1373, 1376, n.4 (Fed. Cir. 2007) and MPEP § 2163.04. The support for the limitations is not apparent, and applicant has not pointed out where the limitation is supported; see also MPEP §§ 714.02 and 2163.06.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-2, 8-10, and 16-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
The claim 1 step of:
“based at least on the one or more pixels being associated with the compromised visibility, filtering out at least one pixel of the one or more pixels to determine second image data corresponding to the image”,
lacks a corresponding disclosure sufficient for definiteness under 35 U.S.C. 112(b), as well as evidence of a disclosed “species” of the claimed genus sufficient for support under 35 U.S.C. 112(a).
Applicant's representative, in an interview conducted on 7/14/2025, referred to par. [0038] of PG Pub. US 20230012645 A1. However, both that paragraph and original dependent claim 2 (which depends from claim 1 and discloses how the filtering out is performed) teach filtering compromised-visibility pixels based on the importance and usability of the pixels for performing one or more operations with respect to semi-autonomous or autonomous driving, not based on the pixels being associated with the compromised visibility. The disclosure contains no particular steps or algorithms explaining how the “filtering out at least one pixel of the one or more pixels to determine second image data corresponding to the image, based at least on the one or more pixels being associated with the compromised visibility” is performed, and thus no supporting species that would indicate applicant was in possession of the claimed genus. Nothing in the specification explains how the at least one pixel of the one or more pixels is filtered out, based at least on the one or more pixels being associated with the compromised visibility, to determine the second image data corresponding to the image. Therefore, claim 1 is indefinite under 35 U.S.C. 112(b) and lacks sufficient written description support under 35 U.S.C. 112(a).
As to claims 2, 8-10, and 16-20, refer to the claim 1 rejection.
The claim 17 limitation of:
“a system for performing conversational AI operations”,
lacks a corresponding disclosure sufficient for definiteness under 35 U.S.C. 112(b), as well as evidence of a disclosed “species” of the claimed genus sufficient for support under 35 U.S.C. 112(a).
The specification refers to a neural network (e.g., paragraphs of PG Publication US 20230012645 A1) as performing this step, but lacks any algorithm describing how this step is performed by the neural network and, most importantly, how the neural network is trained to perform this step. The specification essentially discloses “In-cabin monitoring camera sensor is preferably monitored by a neural network running on another instance of the Advanced SoC, configured to identify in cabin events and respond accordingly. An in-cabin system may perform lip reading to activate cellular service and place a phone call,” while “a system for performing conversational AI operations,” as known in the art, refers to technologies that users can talk to: conversational AI combines natural language processing (NLP) with machine learning to imitate human interactions, recognizing speech and text inputs and translating their meanings across various languages. There is no disclosed algorithm for “a system for performing conversational AI operations.” Nothing in the specification explains how the neural network performs “conversational AI operations,” or how it is trained to do so. Therefore, claim 17 is indefinite under 35 U.S.C. 112(b) and lacks sufficient written description support under 35 U.S.C. 112(a).
As to claim 20, refer to the claim 17 rejection.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter (an abstract idea without significantly more). The claims recite a method for performing one or more operations associated with a machine.
Step 1:
With regard to step (1), claim 1 is directed to a method, i.e., one of the statutory categories of invention.
Step 2A-1:
With regard to 2A-1, the limitation of “determining, using one or more neural networks and based at least on first image data corresponding to an image, one or more pixels of the image associated with compromised visibility”, as drafted, is a process that, under its broadest reasonable interpretation and given the field of endeavor (a neural network is a mere mathematical algorithm implemented as software on a generic computer), can practically be performed as a mental process. That is, nothing in the claim elements precludes the step from reasonably and practically being performed as a mental process in the human mind (MPEP 2106: “MENTAL PROCESS”: the “mental process” abstract idea grouping is defined as concepts performed in the mind, and examples of mental processes include observations, evaluations, judgments, and opinions). The courts do not distinguish between mental processes that are performed entirely in the human mind and mental processes that require a human to use a physical aid (e.g., pen and paper or a slide rule) to perform the claim limitation. See, e.g., Benson, 409 U.S. at 67, 65, 175 USPQ at 674-75, 674 (noting that the claimed “conversion of [binary-coded decimal] numerals to pure binary numerals can be done mentally,” i.e., “as a person would do it by head and hand”). Nor do the courts distinguish between claims that recite mental processes performed by humans and claims that recite mental processes performed on a computer. As the Federal Circuit has explained, “[c]ourts have examined claims that required the use of a computer and found that the underlying, patent-ineligible invention could be performed via pen and paper or in a person’s mind.” Versata Dev. Group v. SAP Am., Inc., 793 F.3d 1306, 1335, 115 USPQ2d 1681, 1702 (Fed. Cir. 2015). See also Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1318, 120 USPQ2d 1353, 1362 (Fed. Cir. 2016).

Similarly, the limitation of “based at least on the one or more pixels being associated with the compromised visibility, filtering out at least one pixel of the one or more pixels to determine second image data corresponding to the image”, as drafted, is a process that, under its broadest reasonable interpretation, is directed to a mathematical algorithm derived by mathematical calculations from mathematical relationships.

Similarly, the limitation of “performing, based at least on the second image data, one or more operations associated with a machine”, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim elements precludes the step from reasonably and practically being performed as a mental process in the human mind (MPEP 2106: “MENTAL PROCESS”: performed in the mind). For example, in the context of this claim, this encompasses an operator (user) performing a machine operation based on the filtered image data (second image data).
If a claim limitation, under its broadest reasonable interpretation, is directed to extra-solution activity for data gathering, or to mathematical relationships and calculations (a mathematical algorithm), then it falls within the grouping of the abstract idea. Accordingly, the claim recites an abstract idea.
Step 2A-2:
The 2019 PEG defines the phrase "integration into a practical application" to require an additional element or a combination of additional elements in the claim to apply, rely on, or use the judicial exception. In the instant case, the additional elements in the claim do not apply, rely on, or use the judicial exception.
This judicial exception is not integrated into a practical application because the claim recites the additional element of one or more neural networks to perform the recited steps. The one or more neural networks is not a particular machine (MPEP 2106.05(b)). The neural network in all steps is recited at a high level of generality (i.e., as a generic computing device performing a generic computer function), such that it amounts to no more than mere instructions to apply the exception using generic computer components as a tool to perform the abstract idea (MPEP 2106.05(f)). The abstract idea does not improve the functioning of the neural network (MPEP 2106.05(a)), but only uses the neural network to perform the abstract idea. The claim generally links the use of the judicial exception to a particular technological environment (analysis of thermal images) (MPEP 2106.05(h)). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they amount to a field-of-use limitation that does not impose any meaningful limits on practicing the abstract idea beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is no more than a drafting effort designed to monopolize the exception (MPEP 2106.05(e) and the Vanda Memo). The claim is not integrated into a practical application. The claim recites an abstract idea.
Step 2B:
Because the claim fails under (2A), the claim is further evaluated under (2B). The claim herein does not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above with respect to integration of the abstract idea into a practical application, the additional elements of one or more neural networks, taken separately or in combination, perform conventional computer functions or a mere mathematical algorithm implemented as software on a generic computer that amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible.
Claim 9 is a system claim analogous to method claim 1; grounds of rejection analogous to those applied to claim 1 are applicable to claim 9. Furthermore, in the instant case, in step 2A-2, the judicial exception is not integrated into a practical application because the claim only recites the additional elements of one or more neural networks and one or more processors. The one or more neural networks and one or more processors are not particular machines (MPEP 2106.05(b)). The neural network in all steps is recited at a high level of generality (i.e., as a generic computing device performing a generic computer function), such that it amounts to no more than mere instructions to apply the exception using generic computer components as a tool to perform the abstract idea (MPEP 2106.05(f)). The abstract idea does not improve the functioning of the one or more neural networks and one or more processors (MPEP 2106.05(a)), but only uses the one or more neural networks and the one or more processors to perform the abstract idea. The claim generally links the use of the judicial exception to a particular technological environment (operating a machine) (MPEP 2106.05(h)). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they amount to a field-of-use limitation that does not impose any meaningful limits on practicing the abstract idea. According to step 2A-2, the claim recites an abstract idea.
Because the claim fails under (2A-2), the claim is further evaluated under (2B). The claim herein does not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above with respect to integration of the abstract idea into a practical application, the additional elements of the one or more neural networks and the one or more processors amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible.
Claim 18 is a processor claim analogous to method claim 1; grounds of rejection analogous to those applied to claim 1 are applicable to claim 18. Furthermore, in the instant case, in step 2A-2, the judicial exception is not integrated into a practical application because the claim only recites the additional element of processing circuitry. The processing circuitry is not a particular machine (MPEP 2106.05(b)). The processing circuitry in all steps is recited at a high level of generality (i.e., as a generic computing device performing a generic computer function), such that it amounts to no more than mere instructions to apply the exception using generic computer components as a tool to perform the abstract idea (MPEP 2106.05(f)). The abstract idea does not improve the functioning of the processing circuitry (MPEP 2106.05(a)), but only uses the processing circuitry to perform the abstract idea. The claim generally links the use of the judicial exception to a particular technological environment (operating a machine) (MPEP 2106.05(h)). Accordingly, the additional element does not integrate the abstract idea into a practical application because it is a field-of-use limitation that does not impose any meaningful limits on practicing the abstract idea. According to step 2A-2, the claim recites an abstract idea.
Because the claim fails under (2A-2), the claim is further evaluated under (2B). The claim herein does not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above with respect to integration of the abstract idea into a practical application, the additional element of processing circuitry amounts to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible.
Dependent claim 2 recites, determining a value representative of a usability of the image to perform the one or more operations associated with the machine, wherein the filtering out of the at least one pixel of the one or more pixels is further based at least on the value.
This claim element limits the previous mathematical calculations and relationships of "filter at least one pixel," and the claim element remains a step that is directed to mathematical calculations derived from mathematical relationships; the claim remains not integrated into a practical application and does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As to claim 10, refer to claim 2.
Dependent claim 19 recites, “wherein the one or more processing units are further to determine, using one or more neural networks and based at least on initial image data corresponding to the image, the one or more pixels that are associated with the compromised visibility, wherein the initial image data corresponds to the filtered image data prior to the filtering.”
This claim limitation adds an abstract idea in the form of “determining, using a neural network, the pixels associated with the compromised visibility,” which can practically be performed as a mental process; the claim remains not integrated into a practical application and does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Dependent claim 8 recites, wherein the performing the one or more operations associated with the machine comprises: processing the second image data using one or more systems associated with the machine; determining, using the one or more systems and based at least on the second image data, the one or more operations associated with the machine; and performing the one or more operations associated with the machine.
This claim element limits the previous mental step of “performing one or more operations associated with a machine,” and the claim element remains a step that can practically be performed as a mental process; the claim remains not integrated into a practical application and does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As to claim 16, refer to claim 8.
Dependent claim 17 recites, “wherein the system is comprised in at least one of …..”.
The claim limitation adds insignificant extra-solution activity (MPEP 2106.05(g)), and the claim remains not integrated into a practical application, and does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As to claim 20, refer to claim 17.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 2, 8-10, 16-17 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Machot, F. et al., “Real-time raindrop detection based on cellular neural networks for ADAS,” Journal of Real-Time Image Processing, March 26, 2016, 14 pages; Wu, Qi et al., “RAINDROP DETECTION AND REMOVAL USING SALIENT VISUAL FEATURES,” in 2012 19th IEEE International Conference on Image Processing, pp. 941-944, IEEE, 2012; and Zhang et al. (US 8605947 B2).
Claims are rejected as best understood by the Examiner based on the Disclosure.
As to claim 1, Machot discloses a method comprising:
determining, using one or more neural networks and based at least on first image data corresponding to an image, one or more pixels of the image associated with compromised visibility [One of the most common interfering effects leading to a falsified or even inoperative system is raindrops on a vehicle’s windshield, which occur during rainy or snowy conditions (compromised visibility). These adherent raindrops occlude and deform some image areas. For example, raindrops will decrease the performance of clear path detection by adding blurred areas to the image (1. Introduction). A modified CNN using an SVM is used to extract features that can distinguish between pixels of different classes. Therefore, we believe that in any machine vision scenario with possible features, e.g., pixel intensity, edge strength, or color-based features (image data), our proposed approach can be used to classify these classes in real time. Thus, pixel features in a new image (first image data) can be compared with the distribution of features in the training set and classified if they belong to a certain class. The features represent a class (here: -1 represents a non-raindrop, +1 represents a raindrop) (section 5 and the original input image, Fig. 3)].
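For illustration only, the following minimal Python sketch shows the kind of per-pixel classification Machot is cited for: features are computed for every pixel, an SVM is trained on labeled pixels, and new pixels are labeled -1 (non-raindrop) or +1 (raindrop). The feature set, the random stand-in data, and all names are assumptions for demonstration, not Machot's actual implementation.

```python
# Illustrative sketch (not Machot's implementation): per-pixel raindrop
# classification from simple pixel features, with SVM labels -1/+1.
import numpy as np
from sklearn import svm

def pixel_features(image: np.ndarray) -> np.ndarray:
    """Return an (H*W, 3) feature matrix: intensity, edge strength, color cue."""
    gray = image.mean(axis=2)
    gy, gx = np.gradient(gray)
    edge = np.hypot(gx, gy)                          # edge-strength proxy
    blue = image[..., 2] / (image.sum(axis=2) + 1e-6)
    return np.stack([gray.ravel(), edge.ravel(), blue.ravel()], axis=1)

# Training stage: labeled pixels from a training frame (random stand-in data).
train_img = np.random.rand(64, 64, 3)
train_lbl = np.random.choice([-1, 1], size=64 * 64)
clf = svm.SVC(kernel="rbf").fit(pixel_features(train_img), train_lbl)

# Classification stage: features of a new image ("first image data") are
# compared with the training distribution and labeled per pixel.
new_img = np.random.rand(64, 64, 3)
raindrop_mask = clf.predict(pixel_features(new_img)).reshape(64, 64) == 1
```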
Machot determines raindrops in all regions (road, sky, horizon, etc.) of the original input image (see Fig. 3) (one or more pixels of the image associated with compromised visibility) using a neural network. Machot does not disclose: based at least on the one or more pixels being associated with the compromised visibility, filtering out at least one pixel of the one or more pixels to determine second image data corresponding to the image; and performing, based at least on the second image data, one or more operations associated with a machine.
Wu discloses a method for limiting raindrop detection to a region of interest (ROI) below the horizon, i.e., the clear path of the road region that is associated with in-vehicle vision, using a saliency map generated by Adaboost learning to label the raindrop and non-raindrop regions in the ROI. Therefore, combining color, texture and shape saliency features (features of compromised-visibility pixels), we generate a raindrop saliency map to locate the raindrop candidates in the ROI (clear path of the road). The method reduces the number of false alarms (i.e., regions mis-detected as raindrops) (sections 1 and 2, Fig. 1), and the raindrops are removed (section 4). As shown in Fig. 1: (A) original images with ROI below the horizon (yellow rectangle); (B) raindrop candidates detected only for the clear path of the road region below the horizon, with the raindrops above the horizon removed (filtered); (C) raindrop detection below the horizon for the clear path (raindrop: red region); (D) image with raindrop removal (second image data). That is, the saliency map excludes (filters) the raindrops above the horizon, limits the raindrop detection to small locally salient droplets in the clear path of the road region (below the horizon), and the raindrops are removed to generate second image data.
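For illustration only, the following sketch captures the two-stage idea Wu is cited for: detection is restricted to an ROI below the horizon, a combined saliency map locates raindrop candidates, and the detected pixels are removed to form the second image data. The equal cue weights, the threshold, and the median-fill removal are assumptions, not Wu's actual algorithm.

```python
# Illustrative sketch (not Wu's implementation): ROI-limited saliency
# detection followed by removal of the detected raindrop pixels.
import numpy as np

def filter_raindrops(image, color_sal, texture_sal, shape_sal,
                     horizon_row, thresh=0.7):
    h, w = image.shape[:2]
    saliency = (color_sal + texture_sal + shape_sal) / 3.0
    roi = np.zeros((h, w), dtype=bool)
    roi[horizon_row:, :] = True                  # clear-path region below horizon
    raindrop = (saliency > thresh) & roi         # candidates only inside the ROI
    second = image.copy()
    # Naive removal: fill raindrop pixels with the median clear-path color
    # (Wu uses a dedicated removal step; this stand-in just marks the idea).
    second[raindrop] = np.median(image[roi & ~raindrop], axis=0)
    return second, raindrop
```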
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use the teachings of Wu to modify the method of Machot by filtering out at least one pixel of the one or more pixels to determine second image data corresponding to the image, based at least on the one or more pixels being associated with the compromised visibility, in order to reduce the number of false alarms (i.e., regions mis-detected as raindrops) (section 1) and to reduce the computation time and save processing resources by reducing the quantity of processed data.
Wu does not disclose performing, based at least on the second image data, one or more operations associated with a machine.
Zhang discloses a method to determine a clear path for autonomous or semi-autonomous driving in accordance with the disclosure. Image 10 is depicted including ground 20, horizon 30, and objects 40. Image 10 is collected by camera 110 and represents the road environment in front of vehicle 100. Ground 20 represents the zone of all available paths open to travel without regard to any potential objects. The method of FIG. 3 that determines a clear path upon ground 20 starts by presuming all of ground 20 is clear, and then utilizes available data to disqualify portions of ground 20 as not clear. The method of FIG. 3 instead analyzes ground 20 and seeks to define a clear path confidence likelihood from available data that some detectable anomaly which may represent object 40 limits or makes not clear that portion of ground 20. This focus upon ground 20 instead of objects 40 avoids the complex computational tasks associated with managing the detection of the objects. Individual classification and tracking of individual objects is unnecessary, as individual objects 40 are simply grouped together as a part of the overall uniform limitation upon ground 20. Ground 20, described above as all paths open to travel without discrimination, minus limits placed on ground 20 by areas found to be not clear, defines clear path 50, depicted in FIG. 3 as the area within the dotted lines, or an area with some threshold confidence likelihood of being open for travel of vehicle 100 (Col. 5, lines 32-57). Object 40 that creates not-clear limitations upon ground 20 can take many forms. For example, an object 40 can represent a discrete object such as a parked car, a pedestrian, or a road obstacle, or object 40 can also represent a less discrete change to surface patterns indicating an edge to a road, such as a road-side curb, a grass line, or water covering the roadway (compromised visibility) (Col. 5, lines 58-63). During operation, the camera 110 generates an image for analysis in the processing module 120 (202). The processing module 120 identifies patches (groups of pixels) in the image and selects a patch for analysis (204) (Col. 10, lines 54-57). The decision for the component patch being analyzed can be based upon pixels contained within the patch, for example, with the patch being determined to be unclear if any or some minimum number of pixels within the patch are determined to be not clear (Col. 10, lines 25-29). As described hereinabove, patch-based methods are computationally relatively fast (col. , lines 39-41). A first exemplary filtering method removes pixels above a horizon or vanishing point, including sky and other vertical features that cannot be part of a road surface (456). The term "vanishing point" as used herein is a broad term, and is to be given its ordinary and customary meaning to one ordinarily skilled in the art, and refers to an infinite far point on the horizon that is intersected by multiple parallel lines on the ground in the view. Identifying a road surface creating a clear path on which to drive is necessarily below the vanishing point or horizon line. Filtering images to only analyze an area below the horizon line helps to clarify the pixels being analyzed to identify a road surface from irrelevant pixels (Col. 15, lines 1-12). Additional methods are herein disclosed for detecting a clear path of travel for the vehicle 100, including methods to detect objects around the vehicle and place those objects into a context of a clear path determination.

Detection of the object or objects and any contextual information that can be determined can be utilized to enhance the clear path of travel with an impact of object to the clear path. Information regarding an operational environment of the vehicle, including detection of objects around the vehicle, can be used to define or enhance a clear path (col. 21, lines 1-10). A detected object such as another vehicle or a construction barrier may inhibit safe vehicle travel and/or indicate conditions or constraints limiting vehicle travel. The vehicle detection analysis detects vehicles in a field-of-view. The construction area detection analysis detects construction areas and other zones associated with limiting vehicle travel. The separate methods for analyzing a roadway are used to strengthen confidence in clear paths identified in the field-of-view. Similarly, clear path detection analysis can be used to augment the vehicle detection analysis and the construction area detection analysis (Col. 21, lines 12-22). Construction area detection analysis includes detecting construction areas and other zones associated with limiting vehicle travel. Travel may be limited in construction zones by construction objects, e.g., construction barrels, barriers, fencing and grids, mobile units, scattered debris, and workers. Travel may additionally be limited in construction zones by traveling constraints, e.g., speed zones and merger requirements. Detection of a construction area may be useful for changing vehicle control and operation in a number of ways. Upon detection of a construction area, vehicle programming may initiate or end certain autonomous or semi-autonomous vehicle controls. Vehicle operation can be limited to posted vehicle speeds and travel limited to certain areas or lanes. Detection of a construction zone may also initiate certain programming algorithms or detection schemes including, for example, a construction worker detection algorithm useful in areas where construction worker presence necessitates a speed decrease for the vehicle 100 (Col. 29, lines 9-26). As mentioned above, processing module 120 may include algorithms and mechanisms to actuate autonomous driving control by means known in the art and not described herein, or processing module 120 may simply provide information to a separate autonomous driving system. Reactions to perceived objects can vary, and include but are not limited to steering changes, throttle changes, braking responses, and warning and relinquishing control of the vehicle to the operator (col. 33, lines 34-41).
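For illustration only, the following sketch mirrors the Zhang passages cited above: pixels above the horizon/vanishing point are filtered out, the remaining ground is presumed clear, patches containing too many anomalous pixels are disqualified, and a vehicle reaction is chosen. The anomaly test and all thresholds are stand-ins for Zhang's trained classifiers, not Zhang's actual implementation.

```python
# Illustrative sketch (not Zhang's implementation): horizon filtering plus
# patch-based clear-path decisions driving a vehicle operation.
import numpy as np

def clear_path_map(image, horizon_row, patch=16, max_bad=10):
    road = image[horizon_row:]                   # filter: keep only below horizon
    h, w = road.shape[:2]
    decisions = {}
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            block = road[r:r + patch, c:c + patch]
            bad = int((block.std(axis=2) > 0.3).sum())   # anomalous pixels
            decisions[(r, c)] = bad <= max_bad           # True = clear patch
    return decisions

def react(decisions):
    # Reactions Zhang lists include steering changes, throttle changes,
    # braking responses, and warnings; this mapping is an assumption.
    frac_clear = sum(decisions.values()) / max(len(decisions), 1)
    return "maintain_course" if frac_clear > 0.8 else "brake_and_warn"
```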
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use the teachings of Zhang to modify the combined method of Machot and Wu by performing, based at least on the second image data (the filtered image of the clear path), one or more operations associated with a machine (controlling steering changes and braking responses of an autonomous vehicle) in order to avoid the complex computational tasks associated with managing the detection of the objects (Col. 5, lines 48-49) and to increase the speed of computation and save computational resources (col. , lines 39-41).
Claim 9 is a system claim analogous to method claim 1; grounds of rejection analogous to those applied to claim 1 are applicable to claim 9. Machot further discloses a processor [CNN, section 5.1].
As to claim 2, both Wu and Zhang further disclose, further comprising (an illustrative sketch follows this mapping):
determining a value representative of a usability of the image to perform the one or more operations associated with the machine [In Wu, the Adaboost learning technique is applied on the labeled raindrop and non-raindrop regions (value) to determine the importance of each feature in the saliency map, and the raindrop candidates are then represented as a binary mask (value) (Wu, par. 2.2). The region of interest (ROI) for the in-vehicle vision system is limited to the clear path of the road region (Wu, section 2), i.e., the value indicates the usability of the image of the clear path of the road by the vehicle]. [In Zhang, the likelihood analysis, as mentioned above, may be performed in one exemplary embodiment by application of trained classifiers to features extracted from a patch. One method analyzes the features a priori using a training set of images. In this training stage, distinguishing features are selected from a raw feature set, the distinguishing features being defined by methods known in the art, such as a Leung-Malik filter bank (Col. 7, lines 24-31). Information from the trained classifiers is used to classify or weight the feature as indicating a clear path or not clear path (value representative of the usability of the image) (Col. 7, lines 40-41)];
wherein the filtering out of the at least one pixel of the one or more pixels is further based at least on the value [In Wu, raindrop detection is limited to a region of interest (ROI) below the horizon, for the clear path of the road region that is associated with in-vehicle vision, using a saliency map generated by Adaboost learning to label the raindrop and non-raindrop regions in the ROI. Therefore, combining color, texture and shape saliency features (features of compromised-visibility pixels), we generate a raindrop saliency map to locate the raindrop candidates in the ROI (clear path of the road) (sections 1 and 2, Fig. 1), i.e., the saliency map excludes (filters) the raindrops above the horizon and limits the raindrop detection to small locally salient droplets in the clear path of the road region (below the horizon) based on the labeled raindrop and non-raindrop regions (usability value)]. [In Zhang, a first exemplary filtering method removes pixels above a horizon or vanishing point, including sky and other vertical features that cannot be part of a road surface (456). The term "vanishing point" as used herein is a broad term, and is to be given its ordinary and customary meaning to one ordinarily skilled in the art, and refers to an infinite far point on the horizon that is intersected by multiple parallel lines on the ground in the view. Identifying a road surface creating a clear path on which to drive is necessarily below the vanishing point or horizon line. Filtering images to only analyze an area below the horizon line helps to clarify the pixels being analyzed to identify a road surface from irrelevant pixels (Col. 15, lines 1-12), i.e., pixels above the horizon that cannot be part of the road and are not clear path (not important for the vehicle) (value of usability of the image) are filtered out, while pixels below the horizon that are clear path on which to drive are kept].
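For illustration only, the following sketch shows how a per-pixel usability value could make the filtering decision depend on both compromised visibility and that value, in the manner this mapping reads on Wu and Zhang. The confidence scores standing in for the cited Adaboost/classifier weights, and the removal rule, are assumptions.

```python
# Illustrative sketch: filtering conditioned on a usability value
# (e.g., ~0 above the horizon, higher on the clear path of the road).
import numpy as np

def filter_by_usability(image, compromised_mask, usability, min_use=0.5):
    """usability: (H, W) scores in [0, 1] for how much each pixel matters."""
    # Remove a compromised pixel only where the region is usable/important;
    # elsewhere (e.g., sky) the pixel is simply excluded from analysis.
    to_remove = compromised_mask & (usability >= min_use)
    second = image.copy()
    second[to_remove] = 0                        # placeholder removal value
    return second, to_remove
```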
As to claim 10, refer to the claim 2 rejection.
As to claim 8, Zhang further discloses, wherein the performing the one or more operations associated with the machine comprises (an illustrative sketch follows this mapping):
processing the second image data using one or more systems associated with the machine [During vehicle operation, the processing module 120 can analyze image areas, for example exemplary image area 602 depicted in FIG. 13, for correspondence to the predetermined construction templates using template matching programming, such as the type described hereinabove. The processing module 120 may scan for general outlines of a certain template before initiating more computationally intense processing (Col. 29, lines 49-56)];
determining, using the one or more systems and based at least on the second image data, the one or more operations associated with the machine [Construction area detection analysis includes detecting construction areas and other zones associated with limiting vehicle travel. Travel may be limited in construction zones by construction objects, e.g., construction barrels, barriers, fencing and grids, mobile units, scattered debris, and workers. Travel may additionally be limited in construction zones by traveling constraints, e.g., speed zones and merger requirements. Detection of a construction area may be useful for changing vehicle control and operation in a number of ways (Col. 29, lines 9-17)]; and
performing the one or more operations associated with the machine [Upon detection of a construction area, vehicle programming may initiate or end certain autonomous or semi-autonomous vehicle controls. Vehicle operation can be limited to posted vehicle speeds and travel limited to certain areas or lanes. Detection of a construction zone may also initiate certain programming algorithms or detection schemes including, for example, a construction worker detection algorithm useful in areas where construction worker presence necessitates a speed decrease for the vehicle 100 (Col. 27, lines 18-26)].
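For illustration only, the following sketch walks through the three recited steps as mapped onto Zhang: processing the second image data with an associated system (template matching against a construction-object template), determining the operation(s) from the result, and performing them. cv2.matchTemplate is a real OpenCV call; the template, the threshold, and the print stub for vehicle control are assumptions.

```python
# Illustrative sketch (not Zhang's implementation): template matching on the
# second image data drives selection and performance of vehicle operations.
import cv2
import numpy as np

def detect_and_react(second_image_gray: np.ndarray, barrel_tmpl: np.ndarray):
    # (1) Process the second image data using an associated system.
    result = cv2.matchTemplate(second_image_gray.astype(np.float32),
                               barrel_tmpl.astype(np.float32),
                               cv2.TM_CCOEFF_NORMED)
    # (2) Determine the operation(s) from the processing result.
    construction = result.max() > 0.8            # construction area detected?
    operations = ["limit_speed", "restrict_lanes"] if construction else []
    # (3) Perform the operation(s) (stubbed vehicle control hook).
    for op in operations:
        print("vehicle control:", op)
    return operations
```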
As to claim 16, refer to the claim 8 rejection.
As to claim 17, Zhang further discloses, wherein the system is comprised in at least one of:
a control system for an autonomous or semi-autonomous machine [to actuate autonomous driving control by means known in the art and not described herein, or processing module 120 may simply provide information to a separate autonomous driving system (col. 5, lines 11-14). A clear path for autonomous or semi-autonomous driving in accordance with the disclosure (col. 5, lines 33-34)];
a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations [Vision systems provide an alternate source of sensor input for use in vehicle control systems (col. 26, lines 23-24)].
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 18 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Wu, Qi et al., “RAINDROP DETECTION AND REMOVAL USING SALIENT VISUAL FEATURES,” in 2012 19th IEEE International Conference on Image Processing, pp. 941-944, IEEE, 2012.
As to claim 18, Wu discloses a processor comprising:
processing circuits [a machine learning (processing circuitry) based approach to detect raindrops on a windshield by analyzing the color, texture, and shape characteristics of raindrops in images and to remove the detected raindrops (section 6)] to perform one or more operations associated with a machine [to improve the performance of automotive vision systems for automotive driving applications in rainy conditions (section 1)] based at least on filtered image data corresponding to an image, the filtered image data being generated by filtering out one or more pixels of the image that are associated with compromised visibility [a method for limiting raindrop detection to a region of interest (ROI) below the horizon, for the clear path of the road region that is associated with in-vehicle vision (filtered image data), using a saliency map generated by Adaboost learning to label the raindrop and non-raindrop regions in the ROI. Therefore, combining color, texture and shape saliency features (features of compromised-visibility pixels), we generate a raindrop saliency map to locate the raindrop candidates in the ROI for the clear path of the road (filtered image) (sections 1 and 2, Fig. 1). As shown in Fig. 1: (A) original images with ROI below the horizon (yellow rectangle); (B) raindrop candidates detected only for the clear path of the road region below the horizon, with the raindrops (one or more pixels) above the horizon removed (filtered)].
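For illustration only, the following end-to-end sketch shows the claim 18 reading on Wu: filtered image data is generated by removing compromised-visibility pixels inside the below-horizon ROI, and a machine operation is then chosen based at least on that data. The contrast-based per-pixel score and the speed decision are assumptions for demonstration.

```python
# Illustrative end-to-end sketch: filter compromised pixels, then perform a
# machine operation based on the filtered image data.
import numpy as np

def vision_pipeline(frame: np.ndarray, horizon_row: int, thresh=0.7):
    score = frame.std(axis=2)
    score = score / (score.max() + 1e-6)         # per-pixel "raindrop" score
    roi = np.zeros(frame.shape[:2], dtype=bool)
    roi[horizon_row:] = True                     # clear-path ROI below horizon
    compromised = (score > thresh) & roi
    filtered = frame.copy()
    filtered[compromised] = np.median(frame[roi], axis=0)   # filtered image data
    # Operation associated with the machine, based on the filtered data.
    operation = "reduce_speed" if compromised.mean() > 0.05 else "continue"
    return filtered, operation

frame = np.random.rand(120, 160, 3)              # stand-in camera frame
filtered, op = vision_pipeline(frame, horizon_row=40)
```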
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 19-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wu, Qi et al., “RAINDROP DETECTION AND REMOVAL USING SALIENT VISUAL FEATURES,” in 2012 19th IEEE International Conference on Image Processing, pp. 941-944, IEEE, 2012, as applied to claim 18 above, and further in view of Zhang et al. (US 8605947 B2).
As to claim 19, Wu does not disclose, wherein the processing circuitry is further to determine, using one or more neural networks and based at least on initial image data corresponding to the image, the one or more pixels that are associated with the compromised visibility, wherein the initial image data corresponds to the filtered image data prior to the filtering.
Zhang discloses a method to determine a clear path for autonomous or semi-autonomous driving in accordance with the disclosure. Image 10 (initial image data prior to filtering) is depicted including ground 20, horizon 30, and objects 40. Image 10 is collected by camera 110 and represents the road environment in front of vehicle 100. Ground 20 represents the zone of all available paths open to travel without regard to any potential objects. The method of FIG. 3 that determines a clear path upon ground 20 starts by presuming all of ground 20 is clear, and then utilizes available data to disqualify portions of ground 20 as not clear (filtered image). The method of FIG. 3 instead analyzes ground 20 and seeks to define a clear path confidence likelihood from available data that some detectable anomaly which may represent object 40 limits or makes not clear that portion of ground 20. This focus upon ground 20 instead of objects 40 avoids the complex computational tasks associated with managing the detection of the objects. Individual classification and tracking of individual objects is unnecessary, as individual objects 40 are simply grouped together as a part of the overall uniform limitation upon ground 20. Ground 20, described above as all paths open to travel without discrimination, minus limits placed on ground 20 by areas found to be not clear, defines clear path 50 (filtered image), depicted in FIG. 3 as the area within the dotted lines, or an area with some threshold confidence likelihood of being open for travel of vehicle 100 (Col. 5, lines 32-57). Object 40 that creates not-clear limitations upon ground 20 can take many forms. For example, an object 40 can represent a discrete object such as a parked car, a pedestrian, or a road obstacle, or object 40 can also represent a less discrete change to surface patterns indicating an edge to a road, such as a road-side curb, a grass line, or water covering the roadway (compromised visibility) (Col. 5, lines 58-63). Not-clear limitations could also be low-lighting environments or when contrast is poor due to glare (compromised visibility) (Col. 12, lines 8-9). During operation, the camera 110 generates an image (initial image) for analysis in the processing module 120 (202). The processing module 120 identifies patches (groups of pixels) in the image and selects a patch for analysis (204) (Col. 10, lines 54-57). Once a patch 60 has been identified for analysis, processing module 120 processes the patch by applying known feature identification algorithms to the patch (Col. 6, lines 45-47). Feature identification algorithms search available visual information for characteristic patterns in the image associated with an object, including features defined by line orientation, line location, color, corner characteristics, other visual attributes, and learned attributes. Feature identification algorithms may be applied to sequential images to identify changes corresponding to vehicle motion, wherein changes not associated with ground movement may be identified as not clear path. Learned attributes may be learned by machine learning algorithms (neural networks) within the vehicle (col. 6, lines 49-59). The decision for the component patch being analyzed can be based upon pixels contained within the patch, for example, with the patch being determined to be unclear (compromised visibility) if any or some minimum number of pixels within the patch are determined to be not clear (Col. 10, lines 25-29).

As described hereinabove, patch-based methods are computationally relatively fast (col. , lines 39-41). A first exemplary filtering method removes pixels above a horizon or vanishing point, including sky and other vertical features that cannot be part of a road surface (456). The term "vanishing point" as used herein is a broad term, and is to be given its ordinary and customary meaning to one ordinarily skilled in the art, and refers to an infinite far point on the horizon that is intersected by multiple parallel lines on the ground in the view. Identifying a road surface creating a clear path on which to drive is necessarily below the vanishing point or horizon line. Filtering images to only analyze an area below the horizon line helps to clarify the pixels being analyzed to identify a road surface from irrelevant pixels (compromised visibility pixels) (Col. 15, lines 1-12).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use the teachings of Zhang to modify the processor of Wu by determining, using one or more neural networks and based at least on initial image data corresponding to the image, the one or more pixels that are associated with the compromised visibility, wherein the initial image data corresponds to the filtered image data prior to the filtering, in order to avoid the complex computational tasks associated with managing the detection of the objects (Col. 5, lines 48-49) and to increase the speed of computation and save computational resources (col. , lines 39-41).
As to claim 20, Zhang further discloses, wherein the processor is comprised in at least one of:
a control system for an autonomous or semi-autonomous machine [to actuate autonomous driving control by means known in the art and not described herein, or processing module 120 may simply provide information to a separate autonomous driving system (col. 5, lines 11-14). A clear path for autonomous or semi-autonomous driving in accordance with the disclosure (col. 5, lines 33-34)];
a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations [Vision systems provide an alternate source of sensor input for use in vehicle control systems (col. 26, lines 23-24)].
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAMIR ANWAR AHMED whose telephone number is (571)272-7413. The examiner can normally be reached on a flexible schedule.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Edward Urban can be reached at (571)272-7899. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SAMIR A AHMED/ Primary Examiner, Art Unit 2665