Prosecution Insights
Last updated: April 19, 2026
Application No. 18/681,575

SYSTEM AND METHOD FOR OBJECT MEASUREMENT

Non-Final OA: §101, §102, §103, §112
Filed
Feb 06, 2024
Examiner
KOPPOLU, VAISALI RAO
Art Unit
2664
Tech Center
2600 — Communications
Assignee
Moldova Aviation Services (2001) Ltd
OA Round
1 (Non-Final)
Grant Probability: 79% (Favorable)
OA Rounds: 1-2
To Grant: 2y 12m
With Interview: 99%

Examiner Intelligence

Grants 79% — above average
Career Allow Rate: 79% (89 granted / 113 resolved; +16.8% vs TC avg)
Interview Lift: +26.8% for resolved cases with an interview (strong)
Typical timeline: 2y 12m avg prosecution; 22 currently pending
Career history: 135 total applications across all art units
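The headline figures above are simple ratios over the examiner's career data. A minimal sketch of the arithmetic behind the displayed rates, assuming the dashboard rounds to whole percent:

```python
# Career allow rate as displayed: granted / resolved.
granted, resolved = 89, 113
allow_rate = granted / resolved * 100
print(round(allow_rate))  # 79 (shown as "79%")

# The "+16.8% vs TC avg" delta implies a Tech Center baseline near 62%.
tc_delta = 16.8
print(round(allow_rate - tc_delta, 1))  # 62.0 (implied TC average)
```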

Statute-Specific Performance

§101: 10.4% (-29.6% vs TC avg)
§102: 13.3% (-26.7% vs TC avg)
§103: 49.2% (+9.2% vs TC avg)
§112: 25.5% (-14.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 113 resolved cases
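Each per-statute delta is simply the examiner's rate minus the Tech Center estimate; notably, all four displayed deltas are consistent with a single flat 40.0% baseline, which the sketch below assumes:

```python
# Per-statute rates from the dashboard above.
rates = {"101": 10.4, "102": 13.3, "103": 49.2, "112": 25.5}
tc_avg = 40.0  # assumed flat TC baseline; consistent with all four displayed deltas

for statute, rate in rates.items():
    delta = rate - tc_avg
    print(f"§{statute}: {rate}% ({delta:+.1f}% vs TC avg)")
```

Running this reproduces the four "vs TC avg" figures shown above (-29.6, -26.7, +9.2, -14.5).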

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The disclosure is objected to because of the following informalities: para [0050] of PGPUB US 20250131543 A1 recites “The reference object is an object that his size measurement are predetrmine and known by the system 200. The reference object can be a physical object of a three or two damentios” (emphasis added with underline). There are grammatical and typographical errors. Appropriate correction is required.

Claim Objections

Claims 1 and 14 are objected to because of the following informalities:

Claim 1: add “and” after the limitation “at least one of said modules used for identifying said measurement object in said at least one picture;”

Claim 14: add “and” after the limitation “calculating the size of said measured object relative to predetermine known size of said at least one referenced object detection;”

Claim 14 recites “wherein if said size of said at least one measured object is bigger then the size of a predetermine object size said at least one measured object is not confirmed, otherwise said at least one measured object is confirmed” (emphasis added with underline). This limitation has a grammatical error. It should read “wherein if said size of said at least one measured object is bigger than the size of a predetermine object size, said at least one measured object is not confirmed, otherwise said at least one measured object is confirmed” (suggested changes emphasized with underline). Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 14 – 24 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 14 recites the limitation “a machine learning (ML) algorithm for detecting objects” in the third limitation and recites the limitation “algorithm” in the fourth and fifth limitations. There is insufficient antecedent basis for this limitation in the claim, as it is unclear and confusing to one of ordinary skill in the art whether the algorithm to identify said reference object is the same as the algorithm to identify said measured object or whether they are two different algorithms. It is further unclear and confusing whether the machine learning algorithm recited in the third limitation is different from the algorithm recited in the fourth and fifth limitations. Appropriate corrections are required.

Claims 15 – 24 are rejected for being directly or indirectly dependent on rejected claim 14.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1 – 24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The limitations, under their broadest reasonable interpretation, cover a mental process (a concept performed in the human mind, including observation, evaluation, judgment, and opinion), methods of organizing human activity, and mathematical concepts and calculations. The claim(s) recite(s) a method, and computer-readable storage medium configured to detect a focus of attention. This judicial exception is not integrated into a practical application because the steps do not add meaningful limitations to be considered specifically applied to a particular technological problem to be solved. The claim(s) do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the steps of the claimed invention can be done mentally, and no additional features in the claims would preclude them from being performed as such except for the generic computer elements recited at a high level of generality (i.e., processor, memory).

According to the USPTO guidelines, a claim is directed to non-statutory subject matter if:

STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture or composition of matter), or

STEP 2: the claim recites a judicial exception, e.g. an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:

STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?

STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?

STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
Using the two-step inquiry, it is clear that claims 1 and 14 are directed to an abstract idea, as shown below:

STEP 1: Do the claims fall within one of the four statutory categories? YES. Claims 1 and 14 are directed to a system and a method.

STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon or an abstract idea? YES, the claims recite steps that fall into the abstract idea category of mental processes.

With regard to STEP 2A (PRONG 1), the guidelines provide three groupings of subject matter that are considered abstract ideas:

Mathematical concepts – mathematical relationships, mathematical formulas or equations, mathematical calculations;

Certain methods of organizing human activity – fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and

Mental processes – concepts that are practicably performed in the human mind (including an observation, evaluation, judgment, opinion).

The system in claim 1 and the method in claim 14 comprise a mental process that can be practicably performed in the human mind (or by generic computers or components configured to perform the method) and are, therefore, an abstract idea.
Regarding Claim 1: A system for object measurement comprising:

whereby, at least one of said modules used for identifying said referenced object in said at least one picture (a mental process including observation and evaluation, which can be done mentally in the human mind or by a generic computer program or components configured to perform the method; identifying the reference object in the picture...);

at least one of said modules used for identifying said measurement object in said at least one picture (a mental process including observation and evaluation, which can be done mentally in the human mind or by a generic computer program or components configured to perform the method; identifying the measurement object in the picture...);

at least one of said modules used for calculating the size of said at least one measured object, utilizing the predetermine known size of said referenced object detection (a mental process including observation and evaluation, which can be done mentally in the human mind or by a generic computer program or components configured to perform the method; calculating the size...);

Regarding Claim 14: the method recites the steps (functions) of:

placing at least one object reference and object measurement aligned to one another (a mental process including observation and evaluation, which can be done mentally in the human mind or by a generic computer program or components configured to perform the method; a machine learning model can be a generic computer program that takes a digital image as an input and generates an output…);

applying a machine learning (ML) algorithm for detecting objects in said at least one picture (a mental process including observation and evaluation, which can be done mentally in the human mind or by a generic computer program or components configured to perform the method; applying a machine learning algorithm…);

applying an algorithm to identify said referenced object in said at least one picture (a mental process including observation and evaluation, which can be done mentally in the human mind or by a generic computer program or components configured to perform the method; applying an algorithm…);

calculating the size of said measured object relative to predetermine known size of said at least one referenced object detection (a mental process including observation and evaluation, which can be done mentally in the human mind or by a generic computer program or components configured to perform the method; calculating the size…);

wherein if said size of said at least one measured object is bigger then the size of a predetermine object size said at least one measured object is not confirmed, otherwise said at least one measured object is confirmed detection (a mental process including observation and evaluation, which can be done mentally in the human mind or by a generic computer program or components configured to perform the method);

These limitations, as drafted, recite a simple process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind or by a human. The Examiner notes that under MPEP 2106.04(a)(2)(III), the courts consider a mental process (thinking) that “can be performed in the human mind, or by a human using a pen and paper” to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, “methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas — the ‘basic tools of scientific and technological work’ that are open to all.” 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs. Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 (“‘[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work’” (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same).

As such, a person could mentally analyze an image and determine a fill level, either mentally or using pen and paper. The mere nominal recitation that the various steps are being executed by a device/in a device (e.g. a processing unit) does not take the limitations out of the mental process grouping. The use of an algorithm or machine learning model to identify objects and then determining and performing an action based on the outcome is a common pattern of data input, analysis, and output, which courts have consistently held to be abstract. The claimed functions – receiving data, identification, and calculating action – could be performed conceptually by a human using pen and paper, and thus fall under abstract mental steps.

Conclusion: Thus, the claims are directed to an abstract idea.

STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application? NO, the claims do not recite additional elements that integrate the judicial exception into a practical application.
With regard to STEP 2A (PRONG 2), whether the claim recites additional elements that integrate the judicial exception into a practical application, the guidelines provide the following exemplary considerations that are indicative that an additional element (or combination of elements) may have integrated the judicial exception into a practical application:

an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field;

an additional element applies or uses a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition;

an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim;

an additional element effects a transformation or reduction of a particular article to a different state or thing; and

an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
While the guidelines further state that the exemplary considerations are not an exhaustive list and that there may be other examples of integrating the exception into a practical application, the guidelines also list examples in which a judicial exception has not been integrated into a practical application:

an additional element merely recites the words “apply it” (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea;

an additional element adds insignificant extra-solution activity to the judicial exception; and

an additional element does no more than generally link the use of a judicial exception to a particular technological environment or field of use.

Claims 1 and 14 do not recite any of the exemplary considerations that are indicative of an abstract idea having been integrated into a practical application. These limitations are recited at a high level of generality (i.e. as a general action or change being taken based on the results of the acquiring step) and amount to mere post-solution actions, which is a form of insignificant extra-solution activity. Further, the claims are claimed generically and operate in their ordinary capacity such that they do not use the judicial exception in a manner that imposes a meaningful limit on it. Merely stating that the functions are performed by “an algorithm or machine learning algorithm” does not demonstrate a technological improvement. There is no indication that the method improves the functioning of a computer, the machine learning model, or classification itself.

Conclusion: Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? NO, the claims do not recite additional elements that amount to significantly more than the judicial exception.

With regard to STEP 2B, whether the claims recite additional elements that provide significantly more than the recited judicial exception, the guidelines specify that the pre-guideline procedure is still in effect. Specifically, examiners should continue to consider whether an additional element or combination of elements:

adds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present; or

simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, which is indicative that an inventive concept may not be present.

Claims 1 and 14 do not recite any additional elements that are not well-understood, routine, or conventional. The claims lack an inventive concept sufficient to transform the abstract idea into patent-eligible subject matter. The use of an algorithm, performing standard classification and modification based on received data, is routine and conventional in the field of machine learning. The claims are functionally generic, with no details about architecture, training, dataset specifics, or a novel arrangement of components.

Conclusion: The claims do not add significantly more than the abstract idea.

Final Determination: INELIGIBLE under 35 U.S.C. 101. Claims 1 and 14 are directed to an abstract idea (mental process and data manipulation) using conventional tools (machine learning algorithms or algorithms) in a generic way, without integration into a practical application or an inventive concept.
Regarding Claims 2 – 13 and 15 – 24: the additional elements recited in these claims do not integrate the mental process into a practical application or add significantly more to it. Merely reciting that the functions are performed “by machine learning model” does not demonstrate a technological improvement. The claims are functionally generic, with no details about architecture, training, dataset specifics, or a novel arrangement of components. Since the claims are directed toward an abstract idea (mental process and data manipulation) using conventional tools in a generic way, without integration into a practical application or an inventive concept, they are ineligible under 35 U.S.C. 101.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
(g)(1) during the course of an interference conducted under section 135 or section 291, another inventor involved therein establishes, to the extent permitted in section 104, that before such person’s invention thereof the invention was made by such other inventor and not abandoned, suppressed, or concealed, or (2) before such person’s invention thereof, the invention was made in this country by another inventor who had not abandoned, suppressed, or concealed it. In determining priority of invention under this subsection, there shall be considered not only the respective dates of conception and reduction to practice of the invention, but also the reasonable diligence of one who was first to conceive and last to reduce to practice, from a time prior to conception by the other.

A rejection on this statutory basis (35 U.S.C. 102(g) as in force on March 15, 2013) is appropriate in an application or patent that is examined under the first to file provisions of the AIA if it also contains or contained at any time (1) a claim to an invention having an effective filing date as defined in 35 U.S.C. 100(i) that is before March 16, 2013, or (2) a specific reference under 35 U.S.C. 120, 121, or 365(c) to any patent or application that contains or contained at any time such a claim.

Claims 1 – 2, 6, 9 – 11, and 13 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Scott et al. (Machine Translation of CN109141250A; hereafter referred to as Scott).
Regarding Claim 1, Scott teaches: A system for object measurement comprising (page 2, summary of the invention, “A method for measuring the size of a luggage, comprising”):

at least one user device (page 2, summary, “a bag size measuring device”), having: at least one camera, at least one module performing one or more tasks, at least one memory unit for storing, loading, and/or maintain said at least one modules (page 2, last para, “acquiring the image of the predetermined area by using the camera module”);

said user device further having at least one processor interpreting and/or executing computer-readable instructions (page 3, para 12, “luggage size measuring server comprising a memory for storing a computer program, the processor running the computer program to cause the luggage size measuring server to perform the luggage size measurement method”);

in said processor access and/or modify and execute said at least one module stored in said memory (page 3, para 13, “The present invention also provides a computer storage medium storing the computer program used in the bag size measurement server”);

at least one communication unit to communicate with at least one communication network and a user interface (page 5, step S13, para 2, “The calculated result can also be displayed using a display screen that is connected to the server”);

said system further comprising, at least one reference object with predetermine known size (page 5, step S13, “the image of the preset reference object, and the size of the preset reference object”);

wherein, said at least one camera take picture of said at least one reference object and said at least one measurement object (page 5, step S13, para 2, “the image of the luggage and the image of the preset reference object are acquired”);

whereby, at least one of said modules used for identifying said referenced object in said at least one picture (page 5, step S12, “An image of a preset reference. The standard size of the preset reference object is already stored in the server. The reference object recognition model is stored in the server and started after the image is input.”);

at least one of said modules used for identifying said measurement object in said at least one picture (page 5, step S12, “determined that at least one of the luggage images exists in the image of the predetermined area”); and

at least one of said modules used for calculating the size of said at least one measured object, utilizing the predetermine known size of said referenced object detection (page 5, “Step S13: Calculate the sizes of all the bags corresponding to all the bag images according to the acquired image of all the bags, the image of the preset reference object, and the size of the preset reference object… The calculation process can be implemented by using an algorithm or an application. For example, an edge processing feature of the bag and the reference object can be extracted in the server by using an image processing algorithm, and compared with the size of the edge of the reference object to obtain the size of the bag. The calculated result can also be displayed using a display screen that is connected to the server so that the staff can arrange the storage of the luggage.”).

Regarding Claim 2, Scott teaches the system according to claim 1, wherein said at least one module applying a Mask R-CNN machine learning algorithm for identifying objects in said pictures taken by said camera (page 5, step S12, para 2, “the reference object recognition model may also be a deep learning model or the like, including a depth learning unit and an output end, and the reference object recognition model receives the image for analysis using the depth learning unit, and then outputs the analysis result from the output end. The deep learning unit may be an RNN learning unit, a CNN learning unit”).
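The calculation Scott's Step S13 describes (extracting edge extents of the bag and the reference object, then comparing against the reference's known size) reduces to a pixel-to-real scale factor, and claim 14's final limitation is a single threshold comparison. A minimal sketch under those assumptions; the function names and numbers are hypothetical, not taken from either reference:

```python
def measure(object_px: float, ref_px: float, ref_known: float) -> float:
    """Scale the measured object's pixel extent by the reference object's
    known real-world size per pixel (the edge-comparison ratio Scott describes)."""
    return object_px * (ref_known / ref_px)

def confirm(measured: float, limit: float) -> bool:
    """Claim 14's check: not confirmed if bigger than the predetermined size."""
    return measured <= limit

# Hypothetical example: a 50 cm reference spans 200 px; the bag spans 185 px.
bag_cm = measure(object_px=185, ref_px=200, ref_known=50.0)
print(bag_cm)                       # 46.25
print(confirm(bag_cm, limit=45.0))  # False -> bag is not confirmed
```

The same ratio works for any unit, since only the reference object's known size carries dimension.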
Regarding Claim 6, Scott teaches the system according to claim 1, wherein said at least one reference object is a three-dimensional object (page 7, Step S36, “a three-dimensional model of the aircraft luggage compartment can be pre-established in the server, and the storage locations corresponding to different size ranges can be divided”).

Regarding Claim 9, Scott teaches the system according to claim 1, wherein the reference object positioned on the measured object or near the measured object aligned side by side (page 5, step S12, para 3, “the preset reference object is movably disposed in the predetermined area, that is, the reference object is movable, and can be placed by the staff at any position within the predetermined area when the camera measures the size of the luggage”).

Regarding Claim 10, Scott teaches the system according to claim 1, wherein said user device is a smart device having a built-in camera (page 4, step S11, para 3, “the camera module can also be a smart camera module, which is provided with a memory and a processor, and stores the luggage identification model, automatically recognizes the luggage therein after acquiring the image”).

Regarding Claim 11, Scott teaches the system according to claim 1, wherein said user device is selected from a group of servers, desktops, laptops, tablets, cellular phones (e.g., smartphones), personal digital assistants (PDAs), multimedia players, embedded systems, wearable devices (e.g., smart watches, smart glasses, etc.), gaming consoles, combinations of one or more of the same, or any other suitable mobile computing device (page 8, para 3, “The instructions are used to cause a computer device (which may be a smartphone, personal computer, server, or network device, etc.) to perform all or part of the steps of the methods described in various embodiments of the present invention”).
Regarding Claim 13, Scott teaches the system according to claim 1, wherein said system further comprising at least one database and at least one application connected to said database detecting objects in said at least one picture (page 4, step S11, para 2, “The camera module can also be connected to the server, and acquire an image of the predetermined area according to the instruction of the server”); to identify the reference object and the measurement object and to calculating the size of the measurement object by utilizing the reference object (page 4, step S11, para 2, “the bag identification model can be stored in the server, and after the server receives the image sent by the camera module, the image is started in the image Identification”; page 5, Step S13, “Calculate the sizes of all the bags corresponding to all the bag images according to the acquired image of all the bags, the image of the preset reference object, and the size of the preset reference object… after all the image of the luggage and the image of the preset reference object are acquired, and according to the size of the preset reference object stored in the server, the size of the corresponding luggage can be calculated.”).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 3 – 4, 8, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Scott et al. (Machine Translation of CN109141250A; hereafter referred to as Scott) in view of Tschechne (US 20200242721 A1; hereafter referred to as Tschechne).

Regarding Claim 3, Scott teaches the system according to claim 1, but fails to explicitly teach: wherein said system further comprising at least one sever for storing information associated with passengers' carry-on bags measurement approval and not approval data and the passenger flights data.
In the same field of endeavor, Tschechne teaches: wherein said system further comprising at least one sever for storing information associated with passengers' carry-on bags measurement approval and not approval data and the passenger flights data (Tschechne, [0021] “the image or the images are first compared with an image of pieces of luggage with known dimensions. If the comparison is not successful, that is to say no dimensions can be identified, the central system attempts to automatically determine the dimensions in the image or images. Only if this also fails is the user asked to identify the dimensions through a comparison with a virtual piece of luggage that is superposed on the image”; [0047] “the volume 11a, 11b is shown to the passenger upon boarding by way of a projector (not illustrated in FIG. 3), which projects their name and the dimensions of the piece of luggage 22 into the luggage compartment 7b, 7d at the location at which the piece of luggage 22 is intended to be stored”).

Scott and Tschechne are considered analogous art, as they are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Scott with the invention of Tschechne to make an invention that stores information associated with passengers' carry-on bags measurement approval and not approval data and the passenger flights data; doing so can efficiently optimize the load factor of the luggage compartment in an aircraft cabin for a flight (Tschechne, [0003]); thus, one of ordinary skill in the art would have been motivated to combine the references.
Regarding Claim 4, Scott in view of Tschechne teaches the system of claim 3, wherein said at least one server is selected from a group of application servers, storage servers, database servers, web servers, cloud servers, and/or any other suitable computing device configured to run certain software applications and/or provide various application, storage, and/or database services (Scott, page 3, para 12, “The present invention also provides a luggage size measuring server comprising a memory for storing a computer program, the processor running the computer program to cause the luggage size measuring server to perform the luggage size measurement method”; Scott, page 7, para 11, “The memory may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may be stored according to the size of the luggage”).

Regarding Claim 8, Scott teaches the system of claim 1, but fails to explicitly teach: wherein said measured object is a carry-on bag that are allowed to enter an airplane. In the same field of endeavor, Tschechne teaches: wherein said measured object is a carry-on bag that are allowed to enter an airplane (Tschechne, [0008] “creation of at least one digital image of a piece of luggage, which is intended to be transported in the luggage compartments during the flight, by a passenger of the flight”). Scott and Tschechne are considered analogous art, as they are reasonably pertinent to the same field of endeavor of image processing.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Scott with the invention of Tschechne to make an invention in which the measured object is a carry-on bag that is allowed to enter an airplane; doing so can efficiently optimize the load factor of the luggage compartment in an aircraft cabin for a flight (Tschechne, [0003]); thus, one of ordinary skill in the art would have been motivated to combine the references.

Regarding Claim 12, Scott teaches the system of claim 1, but fails to explicitly teach: wherein, said user that uses said system is selected from a group of airlines admin worker, manager/worker of ground airport handling agents, airline-employed staff at check-in counters at airports or through an agency arrangement or by way of a self-service kiosk, airlines subcontract ground handling to airports, handling agents or even to another airline. In the same field of endeavor, Tschechne teaches: wherein, said user that uses said system is selected from a group of airlines admin worker, manager/worker of ground airport handling agents, airline-employed staff at check-in counters at airports or through an agency arrangement or by way of a self-service kiosk, airlines subcontract ground handling to airports, handling agents or even to another airline (Tschechne, [0012] “In order that the size of the pieces of luggage, which are also referred to subsequently as pieces of hand luggage, can be determined reliably, the passenger must first in a step a) create or take one or more images of their piece of luggage using a digital camera. As an alternative, the images also can be produced during check-in at a desk by a person carrying out the check-in, who in this case takes on the role of the passenger at least for step a)”). Scott and Tschechne are considered analogous art, as they are reasonably pertinent to the same field of endeavor of image processing.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Scott with the invention of Tschechne to make an invention in which the user of the system is selected from a group of airlines admin worker, manager/worker of ground airport handling agents, and airline-employed staff at check-in counters at airports or through an agency arrangement or by way of a self-service kiosk; doing so can efficiently optimize the load factor of the luggage compartment in an aircraft cabin for a flight (Tschechne, [0003]); thus, one of ordinary skill in the art would have been motivated to combine the references.

Claims 5 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Scott et al. (Machine Translation for CN109141250A; hereafter referred to as Scott) in view of Schimmel et al. (US 20210279501 A1; hereafter referred to as Schimmel).

Regarding Claim 5, Scott teaches the system according to claim 1, but fails to explicitly teach: wherein said at least one reference object is a two-dimensional object. In the same field of endeavor, Schimmel teaches: wherein said at least one reference object is a two-dimensional object (Schimmel, [0010] “the reference tag may be two dimensional or three dimensional”). Scott and Schimmel are considered analogous art, as they are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Scott with the invention of Schimmel to make an invention that uses a two-dimensional object as the reference object; doing so can identify two-dimensional objects and can yield a predictable result (Schimmel, [0013]); thus, one of ordinary skill in the art would have been motivated to combine the references.
Regarding Claim 7, Scott teaches the system according to claim 1, but fails to explicitly teach: wherein said at least one reference object is a QR code. In the same field of endeavor, Schimmel teaches: wherein said at least one reference object is a QR code (Schimmel, [0026] “some embodiments, the reference tag 104 may include a barcode, a Quick Response (QR) code, and/or other information that the dimension measurement system 120 may use to look up the type of the reference tag 104”). Scott and Schimmel are considered analogous art, as they are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Scott with the invention of Schimmel to make an invention that uses a QR code as a reference object; doing so can identify two-dimensional objects and can yield a predictable result (Schimmel, [0013]); thus, one of ordinary skill in the art would have been motivated to combine the references.

Claims 14 and 16 – 21 are rejected under 35 U.S.C. 103 as being unpatentable over Scott et al. (Machine Translation for CN109141250A; hereafter referred to as Scott) in view of Srinivasan et al. (US 11763209 B1; hereafter referred to as Srinivasan).
Regarding Claim 14, Scott teaches: A method for object measurement having at least one user device with at least one camera (page 2, summary, “A method for measuring the size of a luggage, comprising: Obtaining an image of the predetermined area, and determining whether at least one of the image of the predetermined area exists in the image of the predetermined area…the method for measuring the size of the bag includes: before acquiring the image of the predetermined area by using the camera module”); said at least one user device communicating over a communication network and at least one database communicating with said user device over said communication network (page 3, para 13, “The present invention also provides a computer storage medium storing the computer program used in the bag size measurement server”; page 5, step S13, para 2, “The calculated result can also be displayed using a display screen that is connected to the server”), comprising the steps of:

placing at least one object reference and object measurement aligned to one another (page 5, step S12, para 2, “the preset reference object is movably disposed in the predetermined area, that is, the reference object is movable, and can be placed by the staff at any position within the predetermined area when the camera measures the size of the luggage”);

taking at least one picture of said at least one object measurement and said at least one reference object by said user device camera (page 4, step S11, para 2, “The camera module can also be connected to the server, and acquire an image of the predetermined area according to the instruction of the server, and the bag identification model can be stored in the server, and after the server receives the image sent by the camera module, the image is started in the image Identification”);

applying a machine learning (ML) algorithm for detecting objects in said at least one picture (page 4, step S11, para 3, “the bag identification model recognizes that there is a bag in the image, the image of the bag can be extracted and sent to the server”; page 4, last para, “the pre-established luggage identification model may be a deep learning model or the like, that is, the luggage identification model has a deep learning unit”);

applying an algorithm to identify said referenced object in said at least one picture (page 5, step S12, para 2, “An image of a preset reference. The standard size of the preset reference object is already stored in the server. The reference object recognition model is stored in the server and started after the image is input”);

applying an algorithm to identify said measured object in said at least one picture (page 4, last para, “the luggage recognition model receives an image by using a deep learning unit, and then images the image. Analyze to output the analysis results at the output”);

calculating the size of said measured object relative to predetermine known size of said at least one referenced object detection (page 5, Step S13: “Calculate the sizes of all the bags corresponding to all the bag images according to the acquired image of all the bags, the image of the preset reference object, and the size of the preset reference object…The calculation process can be implemented by using an algorithm or an application. For example, an edge processing feature of the bag and the reference object can be extracted in the server by using an image processing algorithm, and compared with the size of the edge of the reference object to obtain the size of the bag.”).

While Scott teaches a luggage recognition model that acquires the images and, using a deep learning unit, analyzes the output results (page 4, last para, “the luggage identification model has a deep learning unit and an output end, for example, the luggage recognition model receives an image by using a deep learning unit, and then images the image.
Analyze to output the analysis results at the output”), it fails to explicitly teach: wherein if said size of said at least one measured object is bigger then the size of a predetermine object size said at least one measured object is not confirmed, otherwise said at least one measured object is confirmed.

In the same field of endeavor, Srinivasan teaches: wherein if said size of said at least one measured object is bigger then the size of a predetermine object size said at least one measured object is not confirmed, otherwise said at least one measured object is confirmed (Srinivasan, col. 9, lines 24 – 29, “when the user is only allowed baggage in the cabin that can fit under his or her seat, and the captured dimensions exceed the maximum dimensions associated with the storage space under a seat, the baggage charges are charges for allowing the user to place his or her baggage in an overhead bin”).

Scott and Srinivasan are considered analogous art, as they are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Scott with the invention of Srinivasan to make an invention in which, if the size of the measured object is bigger than a predetermined object size, the measured object is not confirmed, and otherwise it is confirmed; doing so can reduce confusion and frustration for passengers by limiting the size of carry-on bags/items that a passenger can bring into the cabin of an aircraft (Srinivasan, col. 1, lines 15 – 25); thus, one of ordinary skill in the art would have been motivated to combine the references.
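Scott's quoted calculation (extracting edge features of the bag and the reference object, then comparing against the reference object's known size) amounts to a pixels-per-unit scale conversion. A minimal sketch of that arithmetic follows; the function name, the centimetre units, and the single-plane assumption (measured object and reference object at roughly the same distance from the camera) are illustrative assumptions, not the claimed implementation:

```python
def estimate_size_cm(object_px: float, reference_px: float,
                     reference_cm: float) -> float:
    """Estimate an object's real-world extent from one picture.

    Assumes the measured object and the reference object lie in
    roughly the same plane, so a single pixels-per-cm scale,
    derived from the reference object's known size, applies.
    """
    if reference_px <= 0 or reference_cm <= 0:
        raise ValueError("reference extents must be positive")
    pixels_per_cm = reference_px / reference_cm
    return object_px / pixels_per_cm

# A 30 cm reference spanning 300 px gives 10 px/cm, so a bag
# edge spanning 550 px measures 55 cm:
# estimate_size_cm(550, 300, 30) -> 55.0
```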
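The claim 14 confirmation rule that Srinivasan is cited against (not confirming an object whose measured size exceeds a predetermined size, confirming it otherwise) reduces to a per-dimension comparison. A sketch under the assumption, not stated in the claim, that dimensions are matched largest-to-largest; the names and the example allowance are hypothetical:

```python
def is_confirmed(measured_cm, limit_cm):
    """Confirm a carry-on only if no measured dimension exceeds
    its corresponding predetermined limit.

    Sorting both triples pairs the largest measured dimension
    with the largest allowed dimension, and so on down.
    """
    return all(m <= l for m, l in zip(sorted(measured_cm),
                                      sorted(limit_cm)))

# Against a hypothetical 55 x 40 x 20 cm allowance:
# a 54 x 38 x 20 cm bag is confirmed; a 60 x 40 x 20 cm bag is not.
```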
Regarding Claim 16, Scott in view of Srinivasan teaches the method for object measurement according to claim 14, wherein at least two camera pictures are taken by said camera to picture said reference object and said measurement object, one picture for top view and the other picture for side view (Scott, page 4, step S11, para 3, “The camera module may include a plurality of cameras and may be installed around a predetermined area to acquire an image of a predetermined area in all directions”).

Regarding Claim 17, Scott in view of Srinivasan teaches the method for object measurement according to claim 14, wherein said machine learning (ML) algorithm for detecting objects in said at least one picture is a Mask R-CNN algorithm (Scott, page 5, step S12, para 2, “the reference object recognition model may also be a deep learning model or the like, including a depth learning unit and an output end, and the reference object recognition model receives the image for analysis using the depth learning unit, and then outputs the analysis result from the output end. The deep learning unit may be an RNN learning unit, a CNN learning unit”).

Regarding Claim 18, Scott in view of Srinivasan teaches the method for object measurement according to claim 14, wherein said method further comprising the step of isolating or cut editing of said at least one picture to have in a picture said measured object without said referenced object (Srinivasan, Fig. 8, a baggage picture).

Regarding Claim 19, Scott in view of Srinivasan teaches the method for object measurement according to claim 18, wherein said method further comprising the step of adding size measurements layer on said edited picture (Srinivasan, col. 11, lines 18 – 22, “use of the system 10 and/or implementation of at least a portion of the method 100 results in the display of dimension lines 155 on a GUI so that the user can see the size of his or her bag relative to the maximum dimension lines 155”).
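Claim 16's two-view scheme (one top-view picture and one side-view picture of the reference and measured objects) determines all three dimensions, since a top view supplies length and width while a side view supplies height. A minimal sketch with hypothetical names; averaging the shared length axis is an illustrative smoothing choice, not something either reference specifies:

```python
def dimensions_from_views(top_view_cm, side_view_cm):
    """Combine top-view and side-view measurements into
    (length, width, height), all in cm.

    top_view_cm:  (length, width) from the overhead picture
    side_view_cm: (length, height) from the side picture
    The length axis appears in both views; averaging the two
    readings smooths per-view measurement noise.
    """
    length = (top_view_cm[0] + side_view_cm[0]) / 2
    width = top_view_cm[1]
    height = side_view_cm[1]
    return length, width, height

# dimensions_from_views((55, 40), (55, 20)) -> (55.0, 40, 20)
```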
Regarding Claim 20, Scott in view of Srinivasan teaches the method for object measurement according to claim 14, wherein said method further comprising the step of connecting to at least one database services and sending said measured picture and said results measurements to said user device (Scott, page 5, step S12, para 2, “the reference object recognition model may also be a deep learning model or the like, including a depth learning unit and an output end, and the reference object recognition model receives the image for analysis using the depth learning unit, and then outputs the analysis result from the output end”; Srinivasan, col. 11, lines 18 – 22, “use of the system 10 and/or implementation of at least a portion of the method 100 results in the display of dimension lines 155 on a GUI so that the user can see the size of his or her bag relative to the maximum dimension lines 155”).

Regarding Claim 21, Scott in view of Srinivasan teaches the method for object measurement according to claim 14, wherein said measurement object is a potential carry-on bag and said method determines if said potential carry-on bag is confirmed or not to enter to a flight airplane cabinet (Srinivasan, col. 10, lines 32 – 35, “when the baggage 60 does not exceed the baggage allowance and it is expected that the baggage 60 will be allowed within the cabin of the aircraft”).

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Scott et al. (Machine Translation for CN109141250A; hereafter referred to as Scott) in view of Srinivasan et al. (US 11763209 B1; hereafter referred to as Srinivasan) further in view of Schimmel et al. (US 20210279501 A1; hereafter referred to as Schimmel).

Regarding Claim 15, Scott in view of Srinivasan teaches the method for object measurement according to claim 14, but fails to explicitly teach: wherein, said object reference is a two-dimensional Quick Response (QR) code.
In the same field of endeavor, Schimmel teaches: wherein, said object reference is a two-dimensional Quick Response (QR) code (Schimmel, [0026] “some embodiments, the reference tag 104 may include a barcode, a Quick Response (QR) code, and/or other information that the dimension measurement system 120 may use to look up the type of the reference tag 104”). Scott, Srinivasan and Schimmel are considered analogous art, as they are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Scott in view of Srinivasan with the invention of Schimmel to make an invention that uses a QR code as a reference object; doing so can identify two-dimensional objects and can yield a predictable result (Schimmel, [0013]); thus, one of ordinary skill in the art would have been motivated to combine the references.

Claims 22 – 24 are rejected under 35 U.S.C. 103 as being unpatentable over Scott et al. (Machine Translation for CN109141250A; hereafter referred to as Scott) in view of Srinivasan et al. (US 11763209 B1; hereafter referred to as Srinivasan) further in view of Tschechne (US 20200242721 A1; hereafter referred to as Tschechne).

Regarding Claim 22, Scott in view of Srinivasan teaches the method for object measurement according to claim 21, but fails to explicitly teach: further comprising the step of storing said measurement results data and report about a passenger and said passenger potential carry-on bag. In the same field of endeavor, Tschechne teaches: further comprising the step of storing said measurement results data and report about a passenger and said passenger potential carry-on

Prosecution Timeline

Feb 06, 2024
Application Filed
Dec 12, 2025
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586356
ARTIFICIAL IMAGE GENERATION WITH TRAFFIC SIGNS
2y 5m to grant Granted Mar 24, 2026
Patent 12579680
IMAGE PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 17, 2026
Patent 12579824
OCCUPANT DETECTION DEVICE AND OCCUPANT DETECTION METHOD
2y 5m to grant Granted Mar 17, 2026
Patent 12573210
PARKING ASSISTANCE DEVICE
2y 5m to grant Granted Mar 10, 2026
Patent 12573087
OBJECT THREE-DIMENSIONAL LOCALIZATIONS IN IMAGES OR VIDEOS
2y 5m to grant Granted Mar 10, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
79%
Grant Probability
99%
With Interview (+26.8%)
2y 12m
Median Time to Grant
Low
PTA Risk
Based on 113 resolved cases by this examiner. Grant probability derived from career allow rate.
