DETAILED ACTION
This action is in response to the application filed on August 22, 2024. Claims 1-15 have been examined in this application. The Information Disclosure Statement (IDS) filed on January 6, 2025 has been acknowledged.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.
Step 1: Claims 1-15 are drawn to a system (i.e., a machine). (Step 1: YES).
Step 2A - Prong One: In prong one of step 2A, the claim(s) is/are analyzed to evaluate whether it/they recite(s) a judicial exception.
Claim 1: A system for managing and diagnosing repairs of a consumer product comprising: a mobile application executed on a mobile device having a processor, the mobile application comprising:
a scanning module for scanning an image of the consumer product by an end-user;
an image analysis module for extracting keypoints in the image;
and a repair application executed on a server, the repair application comprising: an error detection module for determining a ranking of plurality of error messages based on the keypoints and the image, wherein the error messages are stored in a product database associating each of the plurality of error message with a correction instruction;
and a transmission module for transmitting the plurality of error messages to the mobile application, wherein the mobile application displays, to the end-user, a first error message from the plurality of error messages along with an associated correction instruction from the product database.
Claim 9: A system for managing repairs of a consumer product comprising: a mobile application executed on a mobile device having a processor, the mobile application comprising:
a scanning module for scanning an image of the consumer product by an end-user;
an information acquisition module for collecting identifying information related to the image;
a code creation module for creating a unique machine-readable code including a unique identification number, the identifying information and a tracking number, wherein an image of the unique machine-readable code is coupled to packaging housing the consumer product;
a transmission module for transmitting the unique machine-readable code to a repair system;
and a repair system for receiving the consumer product, the repair system comprising:
a receiving module for scanning the machine-readable code coupled to the packaging and extracting the unique identification number, the identifying information, and the tracking number;
and a repair module for matching the unique identification number to the received unique machine-readable code to verify receipt of the consumer product;
a technician module for assigning the consumer product to a technician for repairs;
and a return module for returning the repaired consumer product to the end-user.
(Examiner notes: The underlined claim terms above are interpreted as additional elements beyond the abstract idea and are further analyzed under Step 2A - Prong Two)
Under their broadest reasonable interpretation, the claims are directed to an abstract idea, specifically collecting information, analyzing the information, generating diagnostic results, and displaying those results to a user, which constitutes mental processes and data manipulation. In particular, claim 1 recites receiving data by scanning an image of a consumer product, analyzing the image to extract keypoints, evaluating and ranking a plurality of error messages based on that information, consulting a database of error messages and associated correction instructions, transmitting the ranked information, and presenting the first error message and corresponding instruction to the user. These limitations, individually and collectively, describe collecting information, analyzing information, comparing information, and presenting results, which are abstract data-processing operations that can be performed mentally or with pen and paper (i.e., the operations include checking a device, identifying problems, comparing them to solutions, prioritizing them, and recommending fixes, each of which a technician can carry out using pen and paper; implementing these steps on generic computer components (a mobile device, a server, and functional “modules”) merely automates the abstract troubleshooting workflow and does not reflect any specific improvement to computer technology). Consistent with this characterization, applicant’s specification at FIG. 9 depicts an example milestone screen 3322 showing the real-time status of the repair process for repair order 3902. For example, after the end-user 3202 has shipped the power tool, the second milestone 4002 can be selected by the end-user 3202 to view the shipping status. The tracking information associated with the Confirmation Number 3802 can be used to retrieve the associated tracking number, and the end-user 3202 can be directed to a screen or website 3324 showing the current shipment status (e.g., in transit, arrived, etc.).
As each milestone 4002 in the repair process is completed, an icon 4004 associated with the milestone 4002 is changed to reflect completion (0148). Further, claim 9 is directed to the abstract idea of managing a repair workflow through information collection, record creation, tracking, and presentation, which constitutes a combination of organizing human activity, mental processes, and data management. The claim recites collecting product-identifying information, generating a machine-readable code containing identification and tracking data, transmitting that code to a repair system, verifying the product and matching it to stored records, assigning the product to a technician, tracking repair progress through milestone indicators, comparing repair and replacement cost to determine an action, optionally determining a discount code, and returning the repaired product. These recited steps describe receiving, storing, managing, and updating repair-case information, determining workflow status, deciding whether to repair or replace based on pricing rules, and communicating that information to the customer. These activities are characteristic of traditional service-center processes performed mentally or manually, and thus fall squarely within abstract ideas such as organizing human repair activity, evaluating conditions, and presenting information. The claim uses generic computing components (a mobile device, server, scanning module, transmission module) as mere tools to implement a conventional repair-management business process without any technological improvement. Under its broadest reasonable interpretation, the claim amounts to merely collecting product information from an image, comparing it to a database, and presenting the resulting information for user selection. This amounts to data gathering, processing, and displaying of information, which is an instance of organizing human activity and a mental process.
The Examiner notes that although the claim limitations are summarized, the analysis regarding subject matter eligibility considers the entirety of the claim and all of the claim elements individually, as a whole, and as an ordered combination.
The dependent claims 2-8 and 10-15 likewise recite an abstract idea of collecting, analyzing, organizing, and presenting product-related information. Claim 2 recites limiting the type of consumer product to a medical device or power tool, i.e., a field-of-use limitation on the abstract troubleshooting and repair-management process. Claim 3 recites preprocessing the image by denoising and converting it to grayscale, i.e., routine mathematical image-processing steps. Claim 4 recites ranking error messages based on the number of matching results retrieved from a database, which is merely a mathematical comparison and information-evaluation process. Claim 5 recites that correction instructions may be videos or slideshows, which merely specifies the format of information provided to the user. Claim 6 introduces a product-recommendation module, which evaluates information and recommends further actions. Claim 7 recites selecting a replacement product based on user details and product details, which constitutes rules-based decision making and recommendation generation. Claim 8 recites storing replacement products in a database, which is directed to the abstract idea of storing, categorizing, and retrieving information in a data repository. Claim 10 recites displaying repair-progress milestones, which is simply presenting information to a user about the state of a repair case. Claim 11 recites that the user can select a milestone marker to view more details, which is a routine interactive information-presentation feature. Claim 12 recites displaying a unique identification number along with milestones, which is merely associating and presenting stored information. Claim 13 recites recommending replacement when the repair price exceeds the replacement price, which is a conventional economic decision rule. Claim 14 recites using a database mapping products to replacements, which is likewise directed to storing and retrieving information. Claim 15 recites determining a discount code based on user status or repair price, which is an abstract economic decision rule.
Each of these operations constitutes data evaluation, comparison, or information organization, which are abstract ideas under the categories of “methods of organizing human activity” (e.g., marketing, sales optimization) and “mental processes”.
As such, the Examiner concludes that claims 1-15 recite an abstract idea (Step 2A – Prong One: YES).
Step 2A - Prong Two: In prong two of step 2A, an evaluation is made whether a claim recites any additional element, or combination of additional elements, that integrates the exception into a practical application of that exception. An “additional element” is an element that is recited in the claim in addition to (beyond) the judicial exception (i.e., an element/limitation that sets forth an abstract idea is not an additional element). The phrase “integration into a practical application” is defined as requiring an additional element or a combination of additional elements in the claim to apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that it is more than a drafting effort designed to monopolize the exception.
The requirement to execute the claimed steps/functions using a mobile device, a server, a scanning module, an image analysis module, an error detection module, a transmission module, etc. (Claims 1 and 9) is equivalent to adding the words “apply it” on a generic computer and/or mere instructions to implement the abstract idea on a generic computer.
Similarly, the limitations of using a mobile device, a server, a scanning module, an image analysis module, an error detection module, a transmission module, etc. (Independent Claims 1 and 9, and dependent claims 2-8 and 10-15) are recited at a high level of generality and amount to no more than mere instructions to apply the exception using generic computer components. These limitations do not impose any meaningful limits on practicing the abstract idea, and therefore do not integrate the abstract idea into a practical application (see MPEP 2106.05(f)).
Further, the additional limitations beyond the abstract idea identified above serve merely to generally link the use of the judicial exception to a particular technological environment or field of use. Specifically, they serve to limit the application of the abstract idea to computerized environments (e.g., the scanning an image, extracting keypoints, determining, transmitting, generating, verifying, assigning, tracking, and returning steps performed by a mobile device, a server, scanning module, image analysis module, error detection module, transmission module, etc.). This reasoning was demonstrated in Intellectual Ventures I LLC v. Capital One Bank (Fed. Cir. 2015), where the court determined that "an abstract idea does not become nonabstract by limiting the invention to a particular field of use or technological environment, such as the Internet [or] a computer." These limitations do not impose any meaningful limits on practicing the abstract idea, and therefore do not integrate the abstract idea into a practical application (see MPEP 2106.05(h)).
The recited additional elements of scanning an image of the consumer product, collecting identifying information related to the image, transmitting the plurality of error messages to the mobile application, assigning the consumer product, and returning the repaired consumer product (Independent Claims 1 and 9) additionally and/or alternatively simply append insignificant extra-solution activity to the judicial exception (e.g., mere pre-solution activity, such as data gathering, in conjunction with an abstract idea). These limitations do not impose any meaningful limits on practicing the abstract idea, and therefore do not integrate the abstract idea into a practical application (see MPEP 2106.05(g)).
Dependent claims 2-8 and 10-15 fail to include any additional elements. In other words, each of the limitations/elements recited in the respective dependent claims is further part of the abstract idea identified by the Examiner for each respective dependent claim (i.e., they are part of the abstract idea recited in each respective claim).
The Examiner has therefore determined that the additional elements, or combination of additional elements, do not integrate the abstract idea into a practical application. Accordingly, the claims are directed to an abstract idea (Step 2A – Prong Two: NO).
Step 2B: In step 2B, the claims are analyzed to determine whether any additional element, or combination of additional elements, is/are sufficient to ensure that the claims amount to significantly more than the judicial exception. This analysis is also termed a search for an "inventive concept." An "inventive concept" is furnished by an element or combination of elements that is recited in the claim in addition to (beyond) the judicial exception, and is sufficient to ensure that the claim as a whole amounts to significantly more than the judicial exception itself. Alice Corp., 134 S. Ct. at 2355, 110 USPQ2d at 1981 (citing Mayo, 566 U.S. at 72-73, 101 USPQ2d at 1966).
As discussed above in “Step 2A – Prong 2”, the identified additional elements in Independent Claims 1 and 9, and dependent claims 2-8 and 10-15 are equivalent to adding the words “apply it” on a generic computer, and/or generally link the use of the judicial exception to a particular technological environment or field of use. Therefore, the claims as a whole do not amount to significantly more than the judicial exception itself.
The recited additional elements of scanning an image of the consumer product, collecting identifying information related to the image, transmitting the plurality of error messages to the mobile application, assigning the consumer product, and returning the repaired consumer product (Claims 1 and 9) additionally and/or alternatively simply append insignificant extra-solution activity to the judicial exception (e.g., mere pre-solution activity, such as data gathering, in conjunction with an abstract idea). For example, receiving an input message and transmitting a responsive message is similar to “Receiving or transmitting data over a network, e.g., using the Internet to gather data,” Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); “Storing and retrieving information in memory,” Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93; and “Presenting offers to potential customers and gathering statistics generated based on the testing about how potential customers responded to the offers; the statistics are then used to calculate an optimized price,” OIP Technologies, 788 F.3d at 1363, 115 USPQ2d at 1092-93. Such activity is a well-understood, routine, and conventional function when it is claimed in a merely generic manner, as it is here (see MPEP 2106.05(d)(II)).
This conclusion is based on a factual determination. Applicant’s own disclosure at [0137] acknowledges that “repair system 3214 also controls the various application services used to manage repair management system 3200. Application services 3216 may include but are not limited to, customer data collection system 3218, shipping label generator & tracker 3220, notification system 3222, messaging system 3224, financial transaction system 3226, product registration system 3228, repair update and tracking system 3230, and AI module 3232.” (i.e., the conventional nature of receiving and transmitting data/messages over a network). These additional elements therefore do not ensure the claims amount to significantly more than the abstract idea.
Viewing the additional limitations in combination also shows that they fail to ensure the claims amount to significantly more than the abstract idea. When considered as an ordered combination, the additional components of the claims add nothing that is not already present when considered separately, and thus simply append to the abstract idea words equivalent to “apply it” on a generic computer and/or mere instructions to implement the abstract idea on a generic computer, and/or append insignificant extra-solution activity associated with the implementation of the judicial exception (e.g., mere data gathering or post-solution activity), and/or simply append well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception.
The dependent claims 2-8 and 10-15 fail to include any additional elements. In other words, each of the limitations/elements recited in the respective dependent claims is further part of the abstract idea identified by the Examiner for each respective dependent claim (i.e., they are part of the abstract idea recited in each respective claim).
Claims 2-8 add only field-of-use restrictions, routine image-processing steps, basic data ranking, and conventional recommendation or database-mapping functions, all of which merely refine the underlying abstract data-analysis concept without adding any meaningful technological improvement. Likewise, claims 10-15 add generic user-interface actions, workflow-tracking indicators, record-association features, economic decision rules, and discount-determination logic, each of which is recited at a high level of generality and does not integrate the judicial exception into a practical application.
The Examiner has therefore determined that no additional element, or combination of additional claim elements, is sufficient to ensure the claims amount to significantly more than the abstract idea identified above (Step 2B: NO).
Therefore, claims 1-15 are not directed to eligible subject matter under 35 U.S.C. § 101.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
Determining the scope and contents of the prior art.
Ascertaining the differences between the prior art and the claims at issue.
Resolving the level of ordinary skill in the pertinent art.
Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-8 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. 20250078514 (“Hutchens”) in view of U.S. Pub. 20220139541 (“Korst”).
As per claim 1, Hutchens discloses a mobile application executed on a mobile device having a processor, the mobile application comprising (“systems may also be used by certain users, operators, or administrators to assist in a determining if new items or tools may need to be ordered or repaired. In addition, such systems and methods may be configured to assist or play a role in the ordering of new items and equipment if it is determined that new items need to be ordered.”) (0005); a scanning module for scanning an image of the consumer product by an end-user (Examiner notes a camera system (first camera 410) configured to capture images of a hand tool, vial, medicament container, or medical supply, thereby providing a scanning module that captures images of the consumer product) (“first camera 410 may be configured so as to capture one or more images of an item to be identified, such as a hand tool. In addition, the first camera 410 may be configured so as to capture one or more images of certain text provided on an item to be identified, such as a label provided on a medicament vial or a medicament container or a medical supply.”) (0052, 0081-0085);
an image analysis module for extracting keypoints in the image (Examiner notes a CNN-based image recognition engine (AI engine 580) that performs feature extraction, including edges, shapes, text, and patterns, which corresponds to extracting keypoints from the captured image) (“the data processing unit 115 and its corresponding AI engine for image recognition may be used to process data to determine one or a plurality of inventory control parameters. As just one example, the data processing unit 115 may be configured to determine an inventory condition of the first item stored in the first moveable drawer 120 … camera-based machine learning system 520 that uses a convolutional neural network (CNN) AI engine 580 to learn different types of tools and/or different types of labeled items within a drawer of a toolbox 540, such as the toolbox illustrated in FIGS. 1-4. The CNN AI engine 580 comprises a deep learning neural network that performs certain tool image recognition and/or text recognition tasks. In this arrangement, the CNN AI engine 580 can be trained to recognize different types of tools by processing images of one or more tools that are contained with a tool trays contained within each toolbox drawer of the toolbox 540. In this arrangement, the CNN AI engine 580 can also be trained to recognize different types of text provided on various items by processing images of one or more labels that are provided by items contained within each toolbox drawer of the toolbox 540”) (0087-0090);
an error detection module for determining a ranking of plurality of error messages based on the keypoints and the image (Examiner notes the use of cloud-based object detection with confidence levels and corrective data cleaning to determine the presence and likelihood of different detected conditions, corresponding to determining error messages based on image-derived keypoints) (“the cloud-based image identification services can identify and localize multiple objects within an image using a technique called object detection. By leveraging the learned features, the system can detect and classify different objects present in the image. It can provide bounding boxes around each detected object and assign labels to them, indicating the recognized object's category” and “The labeled data may contain errors, missing values, or other issues that need to be corrected before this data can be used for training the AI engine 580. As just one example, this data cleaning step may involve removing outliers, filling in missing values, or correcting errors in the data”) (0108 and 0159, 0148);
wherein the error messages are stored in a product database (Examiner notes storing classification results and detected conditions in a cloud-based inventory system, with post-processing steps enabling error correction at the segment level, corresponding to error messages and corrective information stored in a database) (“the provider or supplier of the inventory control system may retain the AI engine internally, allowing the provider or supplier to make potential inventory control system layout or inventory item (e.g., tool) future changes that the end user may desire or make. In one preferred arrangement, these changes or modifications may be made wirelessly. Integrating an AI engine into a wireless or cloud based inventory control system can bring numerous benefits, especially when it comes to tracking and managing inventory with precision and flexibility”) (0142-0143) associating each of the plurality of error message with a correction instruction (“Saved segments can undergo additional post-processing steps, such as further segmentation, filtering, or enhancement, to improve recognition performance. Segmentation allows for targeted error correction. If a segment is misclassified or poorly processed, it can be individually corrected without affecting the rest of the image.”) (0372):
and a transmission module for transmitting the plurality of error messages to the mobile application (Examiner notes transmitting processed image results and pixel-level diagnostics to a user interface on a web or mobile platform, corresponding to transmitting error messages to the mobile application) (“A pixel checker comprises a tool or software that checks the quality of individual pixels or groups of pixels in a tool image. This is often done to identify any defects, errors, or inconsistencies in the image or video. The purpose of a pixel checker is to ensure that the image or video is of high quality and to identify any issues that need to be corrected” and “a graphical or a textual interface can be utilized to display the data and the results to the user, using elements such as labels, buttons, text boxes, and images. In one arrangement, a web or a mobile platform can be utilized to make the user interface accessible and interactive”) (0185 and 0341):
wherein the mobile application displays, to the end-user, a first error message from the plurality of error messages along with an associated correction instruction from the product database (Examiner notes matching detected objects to database entries, identifying false positives/negatives, and extracting SIFT/SURF/ORB features to display corrected classification results, thereby teaching display of error messages with associated corrective information to the user) (“process step of matching detected and classified objects to known objects helps validate the accuracy of the model. This comparison can help identify false positives (i.e., incorrectly detected objects) and false negatives (i.e., missed detections), allowing for further refinement and improvement of the model. In addition, by cross-referencing with a database of known objects, the model can improve its classification accuracy. If the model's initial classification is uncertain, the database can provide additional context and information to make a more accurate prediction” and “matching process step 943 involves extracting detailed features from the detected object. These features could include edges, key points, textures, shapes, and specific patterns that are unique to the object. In addition, descriptors for the extracted features are calculated. Descriptors comprise numerical values that represent the unique characteristics of the features. Common methods include SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), ORB (Oriented FAST and Rotated BRIEF), and others”) (0403 and 0425-0426).
Hutchens does not specifically disclose a system for managing and diagnosing repairs of a consumer product, or a repair application executed on a server; however, Korst discloses a system for managing and diagnosing repairs of a consumer product (“The service device 102 can be a personal device, such as a mobile computer system such as a laptop or smart device. In other embodiments, the service device 102 may be an imaging system controller or computer integral with or operatively connected with the imaging device undergoing service (e.g., at a medical facility). As another example, the service device 102 may be a portable computer (e.g. notebook computer, tablet computer, or so forth) carried by a FSE performing diagnosis of a fault with the imaging device and ordering of parts. In another example, the service device 102 may be the controller computer of the imaging device under service, or a computer based at the hospital”) (0030);
and a repair application executed on a server, the repair application comprising (“FIG. 1, the servicing information collected using a service call reporting app 108 is fed to a database backend 110 (e.g., implemented at a medical facility or other remote center from where the FSE is performing the service call, or at the imaging device vendor or other servicing contractor). For example, the database backend 110 may implement a service log for the medical imaging device … The parts ordering system 122 utilizes the display device 105 and the at least one user input device 103. The parts ordering system 122 can then be used by the FSE to identify (and optionally actually order) replacement parts for the medical device by PTNs 134, 136 retrieved from the PTN database 130”) (0031-0036):
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Hutchens, in which a client-side module scans images of consumer products, an analysis module extracts keypoints, and a server-side application ranks error messages based on the keypoints, relates them to correction instructions in a database, and transmits them to a mobile application that displays the first error message and its correction, to include a system for managing and diagnosing repairs of a consumer product and a repair application executed on a server, as taught by Korst, for the purpose of providing an efficient tool for receiving information on extracted parts when ordering replacement parts for a medical device or other complex equipment undergoing service.
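Purely as an illustration (and not part of the record of this application), the SIFT/SURF/ORB-style descriptor matching that Hutchens describes at (0425-0426) can be sketched as a brute-force Hamming-distance match over binary descriptors; the descriptor values and the distance threshold below are hypothetical:

```python
# Illustrative sketch: brute-force matching of binary feature descriptors
# (as produced by ORB/BRIEF-style extractors) using Hamming distance.

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two binary descriptors."""
    return bin(a ^ b).count("1")

def match_descriptors(query, database, max_distance=10):
    """For each query descriptor, find the closest database descriptor.

    Returns (query_index, database_index, distance) tuples for matches
    whose Hamming distance does not exceed max_distance.
    """
    matches = []
    for qi, q in enumerate(query):
        di, dist = min(
            ((i, hamming(q, d)) for i, d in enumerate(database)),
            key=lambda t: t[1],
        )
        if dist <= max_distance:
            matches.append((qi, di, dist))
    return matches

# Example: two query descriptors matched against a three-entry database.
query = [0b10110010, 0b01100111]
db = [0b10110011, 0b11111111, 0b01100110]
print(match_descriptors(query, db))  # [(0, 0, 1), (1, 2, 1)]
```

In practice ORB descriptors are 256-bit binary strings; the 8-bit values here are shortened solely to keep the example readable.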
As per claim 2, Hutchens discloses wherein the consumer product is a medical device or a power tool (Examiner notes that Hutchens' AI-based inventory control system monitors items including medical devices and hand tools) (“AI based inventory control system for monitoring a status of a first item, such as a tool, an instrument, a utensil, a medical device, medical supplies, an appliance, or a hand-tool”) (0041, 0081-0085).
As per claim 3, Hutchens discloses wherein the image analysis module denoises and grayscales the image prior to extracting keypoints (Examiner notes a camera system (first camera 410) configured to capture images of a hand tool, vial, medicament container, or medical supply, thereby providing a scanning module that captures images of the consumer product) (“AI based inventory control system orients and denoise the image … AI based inventory control system may also denoise the image data set. Denoising the plurality of images relates to the removal or reduction of the noise that might be present in the images due to low-quality cameras, poor lighting, or compression artifacts. Noise can degrade the quality and clarity of these images, which can affect the performance and accuracy of the computer vision model. By denoising these images, the contrast and sharpness of these images can be enhanced, which can help the computer vision model to extract more meaningful features from the dataset images”) (0290-0291).
As per claim 4, Hutchens discloses wherein ranking of the plurality of error messages is based on a number of results found in the product database (Examiner notes a camera system (first camera 410) configured to capture images of a hand tool, vial, medicament container, or medical supply, thereby providing a scanning module that captures images of the consumer product) (“feature matching can identify and correct errors in earlier stages of the detection process. If an object was misclassified due to similarities with other objects, feature matching can provide a more accurate identification. By using feature matching, the model becomes more robust to variations in lighting, angle, scale, and other environmental factors” and “identifying instances where the model might be uncertain or prone to errors. In addition, reviewing confidence levels can pinpoint false positives and false negatives, enabling targeted improvements in the model. Moreover, analyzing confidence reports can provide feedback that helps in adjusting and retraining the model. For instance, segments or objects with low confidence scores can be flagged for further review and included in additional training cycles. And the system can adapt to varying conditions and contexts by dynamically adjusting its decision-making thresholds based on confidence levels”) (0423 and 0468).
As per claim 5, Hutchens discloses wherein the associated correction instruction is a video or a plurality of images arranged in a slideshow (Examiner notes a camera system (first camera 410) configured to capture images of a hand tool, vial, medicament container, or medical supply, thereby providing a scanning module that captures images of the consumer product) (“the image detection process 600 may utilize a pixel checker. A pixel checker comprises a tool or software that checks the quality of individual pixels or groups of pixels in a tool image. This is often done to identify any defects, errors, or inconsistencies in the image or video. The purpose of a pixel checker is to ensure that the image or video is of high quality and to identify any issues that need to be corrected”) (0185-0189, 0308).
As per claim 6, Hutchens discloses wherein the repair application further comprises: a product recommendation module for recommending a replacement product for the consumer product based on the first error message, wherein the first error message indicates that the consumer product is not repairable (Examiner notes a camera system (first camera 410) configured to capture images of a hand tool, vial, medicament container, or medical supply, thereby providing a scanning module that captures images of the consumer product) (“Broken or deformed tool detection can be useful for applications such as inventory management, quality control, or maintenance. For instance, if we want to keep track of the number and condition of the tools in a warehouse, the computer vision model can scan the images of the various tools in a toolbox, and detect any tools that are missing, broken, or deformed. Or, the computer vison model can check if the tools are safe and functional before using them, wherein the model can be used to inspect the images of the tools and detect any defects or damages that might affect their performance or reliability. Or, if it is desired to repair or replace the tools that are broken or deformed, the model can be used to identify the type and severity of the problem and suggest the appropriate action or solution”) (0314, 0333).
As per claim 7, Hutchens discloses wherein the product recommendation module selects the replacement product from a plurality of replacement products based on the consumer product and details of the end-user (Examiner notes a camera system (first camera 410) configured to capture images of a hand tool, vial, medicament container, or medical supply, thereby providing a scanning module that captures images of the consumer product) (“Check if the tools that model has detected are broken or deformed, and if so, how severely. If the tools are beyond repair, these items can marked as unusable and removed from the inventory. If the tools can be fixed, these can be mark as damaged and then sent or processed for proper maintenance or follow-up. The AI based inventory control system can also be updated with the number and type of tools that need to be replaced or repaired”) (0333, 0314).
As per claim 8, Hutchens discloses wherein the plurality of replacement products are stored in a product recommendation database associating a plurality of consumer products with the plurality of replacement products (Examiner notes a camera system (first camera 410) configured to capture images of a hand tool, vial, medicament container, or medical supply, thereby providing a scanning module that captures images of the consumer product) (“With accurate inventory data, businesses can make better-informed decisions regarding stock levels, ordering, and logistics … the provider or supplier of the inventory control system may retain the AI engine internally, allowing the provider or supplier to make potential inventory control system layout or inventory item (e.g., tool) future changes that the end user may desire or make. In one preferred arrangement, these changes or modifications may be made wirelessly. Integrating an AI engine into a wireless or cloud based inventory control system can bring numerous benefits, especially when it comes to tracking and managing inventory with precision and flexibility”) (0142-0143).
Claims 9-14 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. 20250078514 (“Hutchens”) in view of U.S. Pub. 20220139541 (“Korst”), and further in view of U.S. Pub. 20180308044 (“McCullough”).
As per claim 9, Hutchens discloses a mobile application executed on a mobile device having a processor, the mobile application comprising (“systems may also be used by certain users, operators, or administrators to assist in a determining if new items or tools may need to be ordered or repaired. In addition, such systems and methods may be configured to assist or play a role in the ordering of new items and equipment if it is determined that new items need to be ordered.”) (0005): a scanning module for scanning an image of the consumer product by an end-user (Examiner notes a camera system (first camera 410) configured to capture images of a hand tool, vial, medicament container, or medical supply, thereby providing a scanning module that captures images of the consumer product) (“first camera 410 may be configured so as to capture one or more images of an item to be identified, such as a hand tool. In addition, the first camera 410 may be configured so as to capture one or more images of certain text provided on an item to be identified, such as a label provided on a medicament vial or a medicament container or a medical supply.”) (0052, 0081-0085);
an information acquisition module for collecting identifying information related to the image (Examiner notes that CNN-based image recognition engine (AI engine 580) that performs feature extraction, including edges, shapes, text, and patterns, which corresponds to extracting keypoints from the captured image) (“the data processing unit 115 and its corresponding AI engine for image recognition may be used to process data to determine one or a plurality of inventory control parameters. As just one example, the data processing unit 115 may be configured to determine an inventory condition of the first item stored in the first moveable drawer 120 … camera-based machine learning system 520 that uses a convolutional neural network (CNN) AI engine 580 to learn different types of tools and/or different types of labeled items within a drawer of a toolbox 540, such as the toolbox illustrated in FIGS. 1-4. The CNN AI engine 580 comprises a deep learning neural network that performs certain tool image recognition and/or text recognition tasks. In this arrangement, the CNN AI engine 580 can be trained to recognize different types of tools by processing images of one or more tools that are contained with a tool trays contained within each toolbox drawer of the toolbox 540. In this arrangement, the CNN AI engine 580 can also be trained to recognize different types of text provided on various items by processing images of one or more labels that are provided by items contained within each toolbox drawer of the toolbox 540”) (0087-0090);
a technician module for assigning the consumer product to a technician for repairs (“systems may also be used to record or monitor which items are being used and for how long these items are being used, in case certain such items will need maintenance or need for recalibration. Such systems and methods may also monitor and record which items are being used by whom, such as a particular person, technician, or mechanic” and “wherein the model can be used to inspect the images of the tools and detect any defects or damages that might affect their performance or reliability. Or, if it is desired to repair or replace the tools that are broken or deformed, the model can be used to identify the type and severity of the problem and suggest the appropriate action or solution.”) (0004 and 0314).
Hutchens does not specifically disclose a system for managing and diagnosing repairs of a consumer product comprising a repair application executed on a server, the repair application comprising; however, Korst discloses a system for managing repairs of a consumer product comprising (“The service device 102 can be a personal device, such as a mobile computer system such as a laptop or smart device. In other embodiments, the service device 102 may be an imaging system controller or computer integral with or operatively connected with the imaging device undergoing service (e.g., at a medical facility). As another example, the service device 102 may be a portable computer (e.g. notebook computer, tablet computer, or so forth) carried by a FSE performing diagnosis of a fault with the imaging device and ordering of parts. In another example, the service device 102 may be the controller computer of the imaging device under service, or a computer based at the hospital”) (0030):
and a repair system for receiving the consumer product, the repair system comprising (“FIG. 1, the servicing information collected using a service call reporting app 108 is fed to a database backend 110 (e.g., implemented at a medical facility or other remote center from where the FSE is performing the service call, or at the imaging device vendor or other servicing contractor). For example, the database backend 110 may implement a service log for the medical imaging device … The parts ordering system 122 utilizes the display device 105 and the at least one user input device 103. The parts ordering system 122 can then be used by the FSE to identify (and optionally actually order) replacement parts for the medical device by PTNs 134, 136 retrieved from the PTN database 130”) (0031-0036).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the mobile application of Hutchens, executed on a mobile device having a processor and comprising a scanning module for scanning an image of the consumer product by an end-user, an information acquisition module for collecting identifying information related to the image, and a technician module for assigning the consumer product to a technician for repairs, with the system for managing and diagnosing repairs of a consumer product comprising a repair application executed on a server, as taught by Korst, for the purpose of providing an efficient tool for receiving information on extracted parts when ordering replacement parts for a medical device or other complex equipment undergoing service.
Hutchens does not specifically disclose a code creation and transmission module for generating a unique machine-readable code with identification and tracking information, connected to the product packaging, or a repair system for scanning and verifying the machine-readable code and returning the repaired product to the end-user; however, McCullough discloses a code creation module for creating a unique machine-readable code including a unique identification number, the identifying information and a tracking number, wherein an image of the unique machine-readable code is coupled to packaging housing the consumer product (“Inventory for receipt and put-away is moved to the sterile pack stocking location in the hospital and set in a physical location away from all existing stock. The aforementioned cycle count procedure is completed for all OEM's product … Utilizing the application, the auditor will create a receipt and captures key information from the packing slip (i.e.: ASN #, Order #, ship date, and tracking number). Using the mobile computing device, the auditor scans the Product Number, the auditor scans the Lot Number, the auditor enters a quantity of (e.g., 1), the auditor puts product onto the shelf, and the auditor will proceed to the next package to be received and processes”) (0092-0093);
a transmission module for transmitting the unique machine-readable code to a repair system (“The digital image capture is effective in capturing the bar codes on the items because boxes of implants are generally stacked on mobile racks, as depicted in FIG. 28. The described method therefore enables an auditor (operating the described mobile device) to take a picture of a shelf section. Upon obtaining the image, the software will then convert the bar code information for each item in the picture to its correct data representation for each package in the section, and store this information. The information may be stored locally on the device, and/or may be transmitted to a centralized Analytics Engine”) (0154);
a receiving module for scanning the machine-readable code coupled to the packaging and extracting the unique identification number, the identifying information, and the tracking number (“Auditor will create a receipt and captures key information from the packing slip (i.e.: ASN #, Order #, ship date, and tracking number). The user can then begin processing parts, which includes: entering the OEM Name, Part Number and Quantity for each piece received, placing each piece into its appropriate bin, and closing out the Receipt Package”) (0098);
and a repair module for matching the unique identification number to the received unique machine-readable code to verify receipt of the consumer product (“In order to initiate the Receipt process, a user unpacks and/or opens the box that was retrieved from the inbound OEM stocking location in the hospital facility. The Auditor (user) will pull up the application and launch the receipt application on his or her mobile device. The Auditor will create a receipt and captures key information from the packing slip (i.e.: ASN #, Order #, ship date, and tracking number). The user can then begin processing parts, which includes: entering the OEM Name, Part Number and Quantity for each piece received, placing each piece into its appropriate bin, and closing out the Receipt Package”) (0098, 0131);
and a return module for returning the repaired consumer product to the end-user (“The user (e.g., an auditor) will pull up the application on the mobile computing device and launch the receipt application. Utilizing the application, the auditor will create a receipt and captures key information from the packing slip (i.e.: ASN #, Order #, ship date, and tracking number). Using the mobile computing device, the auditor scans the Product Number, the auditor scans the Lot Number, the auditor enters a quantity of (e.g., 1), the auditor puts product onto the shelf, and the auditor will proceed to the next package to be received and processes. When complete, the Auditor will close out the receipt package. Upon obtaining information that the receipt package was closed by the user, program code executing on the mobile computing device will synchronize its local data with that on the server. The server obtains the inventory and creates a receipt “package” or event.”) (0093).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the mobile application of Hutchens, comprising a scanning module for scanning an image of the consumer product by an end-user, an information acquisition module for collecting identifying information related to the image, and a technician module for assigning the consumer product to a technician for repairs, with the code creation and transmission module for generating a unique machine-readable code with identification and tracking information connected to the product packaging, and the repair system for scanning and verifying the machine-readable code and returning the repaired product to the end-user, as taught by McCullough, for the purpose of improving the ability of consumers to manage inventory. McCullough provides an improved service offering (greater functionality, greater information visibility, improved business decision making) at a much lower cost than the current industry solution for industries including, but not limited to, the medical device industry.
As per claim 10, Hutchens discloses wherein the mobile application further comprises: an information update system for receiving a plurality of repair milestones from the repair system (“FIG. 1 illustrates a perspective view of an AI based inventory control system 100 according to an exemplary embodiment. As illustrated, the AI based inventory control system 100 comprises an AI based inventory control system for monitoring a status of a first item, such as a tool, an instrument, a utensil, a medical device, medical supplies, an appliance, or a hand-tool. As illustrated, the AI based inventory control system comprises an AI based inventory control system for monitoring and tracking a status of one or multiple items in one or multiple drawers of a container, such as a container 10 comprising multiple drawers, such as a toolbox for securely retaining a plurality of items.”) (0041), wherein the information update system displays a plurality of milestone markers, wherein each milestone marker is associated with a different step of the repair of the consumer product by the technician, and wherein the milestone marker is changed from a first icon to a second icon upon completion of each step of the repair process associated with the milestone marker (“computer vision model has provided about the tools in the image and compare it with the data that is stored in the inventory database. This way, the inventory records can be updated and the user interface with the latest status and condition of the tools revised (if necessary). For example, the inventory control systems as disclosed and described herein can perform the follow … if the tools that the model has detected match the tools that are expected to have in the present system's inventory. If there are any discrepancies, such as missing, extra, or wrong tools, these can be flagged, and the proper notification can then be generated”) (0331-0332).
As per claim 11, Hutchens discloses wherein the end-user can select a milestone marker from the plurality of milestone markers to display additional information (“a graphical or a textual interface to show the user the current inventory status, the detected tools, and the actions that need to be taken. The AI based inventory control system can included, change or modify color codes, icons, or charts to highlight the important or urgent hand tool information. The control system can also provide the user with the option to confirm, edit, or cancel the changes that the system proposes”) (0335, 0331-0332).
As per claim 12, Hutchens discloses wherein the information update system displays the unique identification number in association with the plurality of milestone markers (“process continues to step 930 which involves running an object detection engine. This step involves using the model to identify and locate objects in the image dataset that has been preprocessed to improve its quality and consistency. To detect an object in an image, in one preferred arrangement, the computer vision model divides the image into a grid of cells and predicts the bounding boxes, class labels, and confidence scores for each cell. In one preferred arrangement, the bounding box comprises a rectangle that encloses the object, the class label is the name of the object, and the confidence score is the probability that the prediction is correct. The computer vision model can detect multiple objects of different classes in the same image, such as screwdrivers, flashlights, pliers, and other types of hand tools”) (0397).
As per claim 13, Hutchens discloses wherein the repair system further comprises: a product recommendation module for recommending a replacement product for the consumer product if a repair price is greater than the replacement product (“computer vison model can check if the tools are safe and functional before using them, wherein the model can be used to inspect the images of the tools and detect any defects or damages that might affect their performance or reliability. Or, if it is desired to repair or replace the tools that are broken or deformed, the model can be used to identify the type and severity of the problem and suggest the appropriate action or solution”) (0314).
As per claim 14, Hutchens discloses wherein the replacement product is selected from a consumer product database associating consumer products with at least one replacement product (“computer vison model can check if the tools are safe and functional before using them, wherein the model can be used to inspect the images of the tools and detect any defects or damages that might affect their performance or reliability. Or, if it is desired to repair or replace the tools that are broken or deformed, the model can be used to identify the type and severity of the problem and suggest the appropriate action or solution”) (0314).
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. 20250078514 (“Hutchens”) in view of U.S. Pub. 20220139541 (“Korst”), further in view of U.S. Pub. 20180308044 (“McCullough”), and further in view of U.S. Pub. 20160371639 (“Smith”).
As per claim 15, Hutchens does not specifically disclose wherein a discount code is selected for the end-user for the replacement product based upon a current status of the end-user or the repair price; however, Smith discloses wherein a discount code is selected for the end-user for the replacement product based upon a current status of the end-user or the repair price (“one or more of purchase orders, requisitions, receipts, financial documents, or the like. Revenue data includes one or more of sales price, discounts, rebates, financial and lending data, and other such information. The revenue data is sorted into categories correlating to one or more of the representative, facility, case, doctor, a specific medical asset, or the like”) (0068).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the mobile application of Hutchens, comprising a scanning module for scanning an image of the consumer product by an end-user, an information acquisition module for collecting identifying information related to the image, and a technician module for assigning the consumer product to a technician for repairs, with the selection of a discount code for the end-user for the replacement product based upon a current status of the end-user or the repair price, as taught by Smith, for the purpose of tracking shipment information of each tray, wherein the shipping information is accessed via the courier's website and the smart phone application acts as a reporting tool showing the user the shipping status of the shipment.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: U.S. Pub. 20240161918 (“Pavek”).
Pavek discloses that healthcare providers utilize many different tools, instruments, and supplies. Asset tracking systems are available that allow healthcare providers to track such assets. Tracking of assets is more than just inventory and location tracking; it also involves managing and tracking the status of assets as they move through various workflows in order to ensure that the assets will be available and ready when needed.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GAUTAM UBALE whose telephone number is (571) 272-9861. The examiner can normally be reached Mon-Fri, 7:00 AM - 6:30 PM PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Marissa Thein can be reached at (571) 272-6764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GAUTAM UBALE/Primary Examiner, Art Unit 3689