DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 1 and 13 are objected to because of the following informalities:
For claims 1 and 13, remove “and” at the end of the fourth limitation.
Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“a parking space recognition device configured to…”
“a data inspection device configured to…”
“a recognition device configured to…” in claim 1;
“a position accuracy determination device configured to…”
“a type accuracy determination device configured to…”
“a position consistency analysis device configured to…”
“a type accuracy analysis device configured to…” in claim 3; and
“a trigger generator configured to…” in claim 8.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. The specification in paragraphs [0050] – [0051] recites that the data inspection device includes a controller that may be a hardware device such as a processor or CPU. The specification in paragraph [0107] recites that the structure of the devices of the computing device and each of the components of the devices may be implemented in the form of independent hardware including a memory and at least one processor.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
Claims 7 and 16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 7 recites the limitation "type information of the parking space recognition data" (emphasis added with underline). There is insufficient antecedent basis for this limitation in the claim, as claim 3 previously defines “type information”. It is unclear and confusing to one of ordinary skill in the art whether the type information obtained in claim 3 for the parking line and parking slot is different from the type information obtained in claim 7 for the parking space recognition data, because the parking space recognition data includes the type of the parking line and parking slot as recited in claim 1.
Claim 16 recites the limitation "type information of the parking space recognition data" (emphasis added with underline). There is insufficient antecedent basis for this limitation in the claim, as claim 14 previously defines “type information”. It is unclear and confusing to one of ordinary skill in the art whether the type information obtained in claim 14 for the parking line and parking slot is different from the type information obtained in claim 16 for the parking space recognition data, because the parking space recognition data includes the type of the parking line and parking slot as recited in claim 13.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1 – 19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The limitations, under their broadest reasonable interpretation, cover a mental process (a concept performed in the human mind, including observation, evaluation, judgment, and opinion), methods of organizing human activity, and mathematical concepts and calculations. The claims recite a computing device and a method configured to recognize a parking space and inspect parking space recognition data. This judicial exception is not integrated into a practical application because the steps do not add meaningful limitations that apply the exception to a particular technological problem to be solved. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the steps of the claimed invention can be done mentally, and no additional features in the claims would preclude them from being performed as such, except for the generic computer elements recited at a high level of generality (i.e., processor, memory).
According to the USPTO guidelines, a claim is directed to non-statutory subject matter if:
• STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture or composition of matter), or
• STEP 2: the claim recites a judicial exception, e.g. an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:
o STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?
o STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
o STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
Using the two-step inquiry, it is clear that claims 1 and 13 are directed to an abstract idea as shown below:
STEP 1: Do the claims fall within one of the four statutory categories? YES. Claims 1 and 13 are directed to a device and a method.
STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon or an abstract idea? YES, the claims recite steps that fall into the abstract idea category of mental processes.
With regard to STEP 2A (PRONG 1), the guidelines provide three groupings of subject matter that are considered abstract ideas:
• Mathematical concepts – mathematical relationships, mathematical formulas or equations, mathematical calculations;
• Certain methods of organizing human activity – fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and
• Mental processes – concepts that are practicably performed in the human mind (including an observation, evaluation, judgment, opinion).
The system in claim 1 and the methods in claims 9 and 13 comprise a mental process that can be practicably performed in the human mind (or by generic computers or components configured to perform the method) and are, therefore, an abstract idea.
Regarding Claims 1 and 13: A computing device, comprising:
a parking space recognition device configured to recognize a parking space using at least one parking space recognition model configured to recognize the parking space based on training data for the parking space (mental process including observation and evaluation, and can be done mentally in the human mind or a generic computer program or components configured to perform the method; recognizing parking space based on the data…);
a data inspection device configured to inspect information about a parking line and a parking slot based on parking space recognition data recognized by the parking space recognition device (mental process including observation and evaluation, and can be done mentally in the human mind or a generic computer program or components configured to perform the method; inspect information about parking line…);
the at least one parking space recognition model configured to divide an input image for the parking space into cells with a predetermined size and number, and to recognize a position and a type of the parking line and the parking slot for each divided cell (mental process including observation and evaluation, and can be done mentally in the human mind or a generic computer program or components configured to perform the method; dividing an image into cells and recognizing a position and type… recognition model can be a generic computer program…); and
a recognition device configured to combine results recognized from the at least one parking space recognition model to recognize the parking space and output the parking space recognition data (mental process including observation and evaluation, and can be done mentally in the human mind or a generic computer program or components configured to perform the method; combining results and recognizing…); and
wherein the parking space recognition data includes parking line information about a position of the parking line and a type of the parking line and parking slot information about a position of the parking slot and a type of the parking slot (mental process including observation and evaluation, and can be done mentally in the human mind or a generic computer program or components configured to perform the method; combining results and recognizing…).
These limitations, as drafted, recite a simple process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind or by a human. The Examiner notes that under MPEP 2106.04(a)(2)(III), the courts consider a mental process (thinking) that “can be performed in the human mind, or by a human using a pen and paper” to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, “methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas, the ‘basic tools of scientific and technological work’ that are open to all.” 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 (2012) (“‘[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work’” (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same). As such, a person could mentally analyze an image and recognize a parking space, either mentally or using a pen and paper. The mere nominal recitation that the various steps are being executed by or in a device (e.g., a processing unit) does not take the limitations out of the mental process grouping.
The use of an algorithm or machine learning model to identify segmented regions of pixels and then determine and perform an action based on the outcome is a common pattern of data input, analysis, and output, which the courts have consistently held to be abstract.
The claimed functions – recognition, inspection, and dividing – could be performed conceptually by a human using pen and paper, and thus fall under abstract mental steps.
Conclusion: Thus, the claims are directed to an abstract idea.
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application? NO, the claims do not recite additional elements that integrate the judicial exception into a practical application.
With regard to STEP 2A (prong 2), whether the claim recites additional elements that integrate the judicial exception into a practical application, the guidelines provide the following exemplary considerations that are indicative that an additional element (or combination of elements) may have integrated the judicial exception into a practical application:
an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field;
an additional element that applies or uses a judicial exception to affect a particular treatment or prophylaxis for a disease or medical condition;
an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim;
an additional element effects a transformation or reduction of a particular article to a different state or thing; and
an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
While the guidelines further state that the exemplary considerations are not an exhaustive list and that there may be other examples of integrating the exception into a practical application, the guidelines also list examples in which a judicial exception has not been integrated into a practical application:
an additional element merely recites the words “apply it” (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea;
an additional element adds insignificant extra-solution activity to the judicial exception; and
an additional element does no more than generally link the use of a judicial exception to a particular technological environment or field of use.
Claims 1 and 13 do not recite any of the exemplary considerations that are indicative of an abstract idea having been integrated into a practical application.
These limitations are recited at a high level of generality (i.e., as a general action or change being taken based on the results of the acquiring step) and amount to mere post-solution activity, which is a form of insignificant extra-solution activity. Further, the additional elements are recited generically and operate in their ordinary capacity, such that they do not use the judicial exception in a manner that imposes a meaningful limit on the judicial exception.
• Merely stating that the functions are performed by “an algorithm or model” does not demonstrate a technological improvement.
• There is no indication that the method improves the functioning of a computer, the machine learning model, or classification itself.
Conclusion: Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? NO, the claims do not recite additional elements that amount to significantly more than the judicial exception.
With regard to STEP 2B, whether the claims recite additional elements that provide significantly more than the recited judicial exception, the guidelines specify that the pre-guideline procedure is still in effect. Specifically, that examiners should continue to consider whether an additional element or combination of elements:
adds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present; or
simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, which is indicative that an inventive concept may not be present.
Claims 1 and 13 do not recite any additional elements that are not well-understood, routine, or conventional.
• The claims lack an inventive concept sufficient to transform the abstract idea into patent-eligible subject matter.
• The use of a model performing recognition and inspection based on received data is routine and conventional in the field of machine learning.
• The claims are functionally generic with no details about architecture, training, dataset specifics, or a novel arrangement of components.
Conclusion: The claims do not add significantly more than the abstract idea.
Final Determination: INELIGIBLE under 35 U.S.C. 101. Claims 1 and 13 are directed toward an abstract idea (a mental process and data manipulation) using conventional tools (models) in a generic way, without integration into a practical application or an inventive concept.
Regarding Claims 2 – 12 and 14 – 19: the additional elements recited in the claims do not integrate the mental process into a practical application or add significantly more to the mental process. Merely reciting that the functions are performed “by a model” does not demonstrate a technological improvement. The additional limitations further recite calculations that are mathematical concepts and fall under abstract ideas. The claims are functionally generic, with no details about architecture, training, dataset specifics, or a novel arrangement of components. Since the claims are directed toward an abstract idea (a mental process and data manipulation) using conventional tools in a generic way, without integration into a practical application or an inventive concept, they are ineligible under 35 U.S.C. 101.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3, 13, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (US20240029448A1; hereafter referred to as Li) in view of Hwang et al. (see Machine Translation for KR 20240097012 A; hereafter referred to as Hwang).
Regarding Claim 1, Li teaches:
A computing device, comprising:
a parking space recognition device configured ([0018] “A parking space detection device is provided”) to recognize a parking space using at least one parking space recognition model configured to recognize the parking space based on training data for the parking space ([0041] “the parking space detection apparatus 102 may use a pre-trained neural network model to recognize the recognized parking spaces in each detection image frame and multiple parking space corners of each recognized parking space”); and
a recognition device configured to combine results recognized from the at least one parking space recognition model to recognize the parking space and output the parking space recognition data (Li, [0105] “parking space semantic information of the verified parking space is determined and outputted based on the parking space corners of the verified parking space”);
While Li teaches obtaining parking line information and type information (Li, [0106] “the parking space semantic information may include one or more of a parking space corner position, a parking space corner order, a main road direction, a parking space entrance side, a parking space depth, a parking space width, a parking space orientation, a parking space direction type, and a parking space available parking area”), Li fails to explicitly teach:
a data inspection device configured to inspect information about a parking line and a parking slot based on parking space recognition data recognized by the parking space recognition device;
wherein the parking space recognition device includes:
the at least one parking space recognition model configured to divide an input image for the parking space into cells with a predetermined size and number, and to recognize a position and a type of the parking line and the parking slot for each divided cell; and
wherein the parking space recognition data includes parking line information about a position of the parking line and a type of the parking line and parking slot information about a position of the parking slot and a type of the parking slot.
In the same field of endeavor, Hwang teaches:
a data inspection device configured to inspect information about a parking line and a parking slot based on parking space recognition data recognized by the parking space recognition device (Hwang, [0027] “case of a parking space image with a parking partition line drawn, the parking area detection unit 120 may identify the parking area by specifying the parking area based on the parking partition line and dividing it into a plurality of parking partitions”);
wherein the parking space recognition device includes:
the at least one parking space recognition model configured to divide an input image for the parking space into cells with a predetermined size and number, and to recognize a position and a type of the parking line and the parking slot for each divided cell (Hwang, [0028] “parking area information may be an edited image in which a received parking space image includes a mark indicating a parking area divided into multiple parking compartments”); and
a recognition device configured to combine results recognized from the at least one parking space recognition model to recognize the parking space and output the parking space recognition data (Hwang, [0026] “The parking area detection unit 120 detects the parking area in the parking space image received by the image receiver 110 and outputs parking area information. More specifically, the parking area detection unit 120 identifies a parking area in a parking space image and recognizes the specified parking area by dividing it into one or more parking compartments”); and
wherein the parking space recognition data includes parking line information about a position of the parking line and a type of the parking line and parking slot information about a position of the parking slot and a type of the parking slot (Hwang, [0027] “In the case of a parking space image with a parking partition line drawn, the parking area detection unit 120 may identify the parking area by specifying the parking area based on the parking partition line and dividing it into a plurality of parking partitions”; Hwang, [0033] “The parking partition calculation unit 130 calculates a partition position indicating the position in the parking space image for each of the parking partitions recognized by the parking area detection unit 120. To calculate the partition location, the parking partition calculation unit 130 may use parking area information input from the parking area detection unit 120. And the parking compartment calculation unit 130 generates and outputs parking compartment location information, which is information about the calculated compartment location”; Hwang, [0031] “the parking zone detection unit 120 may additionally recognize the parking zone type for each recognized parking zone and include it in the parking zone information. Here, ‘parking compartment type’ indicates the use or characteristics of each parking compartment, and the type may be predetermined depending on the purpose of parking control”).
Li and Hwang are considered analogous art, as they are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Li with the invention of Hwang to make an invention that divides the input image for the parking space into partitions, inspects the parking space information for parking lines and slots, and provides the parking position information and parking type information. Doing so can provide an adaptive learning-based parking information provision device (Hwang, [0007]); thus, one of ordinary skill in the art would have been motivated to combine the references.
Regarding Claim 3, Li in view of Hwang teaches the computing device of claim 1, wherein the data inspection device includes:
a position accuracy determination device configured to determine position accuracy of the parking space recognition data based on position information of the parking line and the parking slot (Li, [0106] “the parking space semantic information may include one or more of a parking space corner position, a parking space width, a parking space orientation, a parking space direction type”; Li, [0187] “can detect the position of the parking space more accurately”);
a type accuracy determination device configured to determine type accuracy of the parking space recognition data based on type information of the parking line and the parking slot (Hwang, [0013] “more accurate parking information can be provided by identifying the parking type for each parking compartment”; Hwang, [0061] “a more accurate parking type judgment can be made”);
a position consistency analysis device configured to analyze position consistency of the parking space recognition data based on the position information of the parking line and the parking slot (Li, [0115] “matching the parking space corner order of the verified parking space in a current detection image frame with the parking space corner order of the verified parking space in a previous detection image frame, to cause the parking space corner order in the current detection image frame to be consistent with the parking space corner order in the previous detection image frame”);
a type accuracy analysis device configured to analyze type consistency of the parking space recognition data based on the type information of the parking line and the parking slot (Li, [0117] “Taking each parking space including four corners as an example, in a case of determining that the first verified parking space A in the former detection image frame and the second verified parking space B in the latter detection image frame are the same parking space, the parking space corner order of the second verified parking space B may be made to be consistent with the parking space corner order of the first verified parking space A through the method for matching parking space corner orders in the following steps a-c”); and
a controller configured to output the result of inspecting the parking space recognition data based on the result of analyzing accuracy and consistency for the position and the type of the parking line and the parking slot (Li, [0131] “the determining and outputting, based on the parking space corners of the verified parking space, parking space semantic information of the verified parking space in step S250 further includes: configuring the parking space corner order based on user input”; Hwang, [0033] “And the parking compartment calculation unit 130 generates and outputs parking compartment location information, which is information about the calculated compartment location. Parking lot location information is an example of parking lot information”).
Regarding Claim 13, Li teaches:
A method for inspecting training data, the method comprising:
recognizing, by a parking space recognition device, a parking space using at least one parking space recognition model configured to recognize the parking space based on training data for the parking space ([0018] “A parking space detection device is provided”; [0041] “the parking space detection apparatus 102 may use a pre-trained neural network model to recognize the recognized parking spaces in each detection image frame and multiple parking space corners of each recognized parking space”); and
inspecting information about a parking line and a parking slot based on the recognized parking space recognition data (Li, [0105] “parking space semantic information of the verified parking space is determined and outputted based on the parking space corners of the verified parking space”);
While Li teaches obtaining parking line information and type information (Li, [0106] “the parking space semantic information may include one or more of a parking space corner position, a parking space corner order, a main road direction, a parking space entrance side, a parking space depth, a parking space width, a parking space orientation, a parking space direction type, and a parking space available parking area”), Li fails to explicitly teach:
inspecting information about a parking line and a parking slot based on the recognized parking space recognition data;
wherein the recognizing of the parking space includes:
dividing, by the at least one parking space recognition model, an input image for the parking space into cells with a predetermined size and number and recognizing, by the at least one parking space recognition model, a position and a type of the parking line and the parking slot for each divided cell; and
wherein the parking space recognition data includes parking line information about a position of the parking line and a type of the parking line and parking slot information about a position of the parking slot and a type of the parking slot.
In the same field of endeavor, Hwang teaches:
inspecting information about a parking line and a parking slot based on the recognized parking space recognition data (Hwang, [0027] “case of a parking space image with a parking partition line drawn, the parking area detection unit 120 may identify the parking area by specifying the parking area based on the parking partition line and dividing it into a plurality of parking partitions”);
wherein the recognizing of the parking space includes:
dividing, by the at least one parking space recognition model, an input image for the parking space into cells with a predetermined size and number and recognizing, by the at least one parking space recognition model, a position and a type of the parking line and the parking slot for each divided cell (Hwang, [0028] “parking area information may be an edited image in which a received parking space image includes a mark indicating a parking area divided into multiple parking compartments”); and
combining results recognized from the at least one parking space recognition model to recognize the parking space and outputting the parking space recognition data (Hwang, [0026] “The parking area detection unit 120 detects the parking area in the parking space image received by the image receiver 110 and outputs parking area information. More specifically, the parking area detection unit 120 identifies a parking area in a parking space image and recognizes the specified parking area by dividing it into one or more parking compartments”); and
wherein the parking space recognition data includes parking line information about a position of the parking line and a type of the parking line and parking slot information about a position of the parking slot and a type of the parking slot (Hwang, [0027] “In the case of a parking space image with a parking partition line drawn, the parking area detection unit 120 may identify the parking area by specifying the parking area based on the parking partition line and dividing it into a plurality of parking partitions”; Hwang, [0033] “The parking partition calculation unit 130 calculates a partition position indicating the position in the parking space image for each of the parking partitions recognized by the parking area detection unit 120. To calculate the partition location, the parking partition calculation unit 130 may use parking area information input from the parking area detection unit 120. And the parking compartment calculation unit 130 generates and outputs parking compartment location information, which is information about the calculated compartment location”; Hwang, [0031] “the parking zone detection unit 120 may additionally recognize the parking zone type for each recognized parking zone and include it in the parking zone information. Here, ‘parking compartment type’ indicates the use or characteristics of each parking compartment, and the type may be predetermined depending on the purpose of parking control”).
Li and Hwang are considered analogous art as they are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Li with the invention of Hwang to make an invention that divides the input image for the parking space into partitions, inspects the parking space information for the parking line and slots, and provides the parking position information and parking type information; doing so can provide an adaptive learning-based parking information provision device (Hwang, [0007]); thus one of ordinary skill in the art would have been motivated to combine the references.
Regarding Claim 14, Li in view of Hwang teaches the method of claim 13, wherein the inspecting includes:
determining position accuracy of the parking space recognition data based on position information of the parking line and the parking slot (Li, [0106] “the parking space semantic information may include one or more of a parking space corner position, a parking space width, a parking space orientation, a parking space direction type”; Li, [0187] “can detect the position of the parking space more accurately”);
determining type accuracy of the parking space recognition data based on type information of the parking line and the parking slot (Hwang, [0013] “more accurate parking information can be provided by identifying the parking type for each parking compartment”; Hwang, [0061] “a more accurate parking type judgment can be made”);
analyzing position consistency of the parking space recognition data based on the position information of the parking line and the parking slot (Li, [0115] “matching the parking space corner order of the verified parking space in a current detection image frame with the parking space corner order of the verified parking space in a previous detection image frame, to cause the parking space corner order in the current detection image frame to be consistent with the parking space corner order in the previous detection image frame”);
analyzing type consistency of the parking space recognition data based on the type information of the parking line and the parking slot (Li, [0117] “Taking each parking space including four corners as an example, in a case of determining that the first verified parking space A in the former detection image frame and the second verified parking space B in the latter detection image frame are the same parking space, the parking space corner order of the second verified parking space B may be made to be consistent with the parking space corner order of the first verified parking space A through the method for matching parking space corner orders in the following steps a-c”); and
outputting the result of inspecting the parking space recognition data based on the result of analyzing accuracy and consistency for the position and the type of the parking line and the parking slot (Li, [0131] “the determining and outputting, based on the parking space corners of the verified parking space, parking space semantic information of the verified parking space in step S250 further includes: configuring the parking space corner order based on user input”; Hwang, [0033] “And the parking compartment calculation unit 130 generates and outputs parking compartment location information, which is information about the calculated compartment location. Parking lot location information is an example of parking lot information”).
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (US20240029448A1; hereafter referred to as Li) in view of Hwang et al. (See Machine Translation for KR 20240097012 A; hereafter referred to as Hwang) further in view of Awan et al. (Awan, F. M., Saleem, Y., Minerva, R., & Crespi, N. (2020). A comparative analysis of machine/deep learning models for parking space availability prediction. Sensors, 20(1), 322; hereafter referred to as Awan).
Regarding Claim 2, Li in view of Hwang teaches the computing device of claim 1 but fails to explicitly teach:
wherein the parking space recognition device is formed as an ensemble model in which a plurality of parking space recognition models and the recognition device are coupled to each other.
In the same field of endeavor, Awan teaches:
wherein the parking space recognition device is formed as an ensemble model in which a plurality of parking space recognition models and the recognition device are coupled to each other (Awan, page 6, Fig. 3, 3.4 Ensemble learning approach, “we combined MLP, KNN, Decision Tree, and Random Forest algorithms to solve the problem of predicting the availability of parking spaces. The Ensemble Learning approach takes the training data and trains each model. After the training process, the Ensemble Learning approach feeds the testing data to the models and then each model predicts a class label for each sample in the testing data”).
Li, Hwang, and Awan are considered analogous art as they are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Li in view of Hwang with the invention of Awan to make an invention that forms the parking space recognition device as an ensemble model in which a plurality of parking space recognition models and the recognition device are coupled to each other; doing so can yield higher prediction accuracy for prediction of parking space availability (Awan, Abstract); thus one of ordinary skill in the art would have been motivated to combine the references.
Allowable Subject Matter
Claims 4 – 12 and 15 – 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims and overcoming any claim objections and claim rejections under 35 U.S.C. 112(b) and 35 U.S.C. 101.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20230331215 A1 Method Of Determining Parking Area Layout
US 20220219679 A1 SPATIAL PARKING PLACE DETECTION METHOD AND DEVICE, STORAGE MEDIUM, AND PROGRAM PRODUCT
US 20250109947 A1 NAVIGATION SYSTEM WITH PARKING SPACE AND OBSTRUCTION DETECTION MECHANISM AND METHOD OF OPERATION THEREOF
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VAISALI RAO KOPPOLU whose telephone number is (571)270-0273. The examiner can normally be reached Monday - Friday 8:30 - 5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Mehmood can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
VAISALI RAO KOPPOLU
Examiner
Art Unit 2664
/JENNIFER MEHMOOD/Supervisory Patent Examiner, Art Unit 2664