Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Status of Claims
This action is a final rejection.
Claims 1-20, 41, and 43 were cancelled.
Claims 21-40, 42, and 44-45 are pending.
Claims 21, 31, and 39 were amended.
Claims 21-40, 42, and 44-45 are rejected under 35 U.S.C. § 101.
Claims 21-40, 42, and 44-45 are rejected under 35 U.S.C. § 103.
Priority
Acknowledgment is made of Applicant’s claim for a domestic priority date of September 20, 2018.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 21-40, 42, 44-45 are not patent eligible because the claimed invention is directed to an abstract idea without significantly more.
Analysis
First, the claims fall within one or more of the following statutory categories: a process, a machine, a manufacture, or a composition of matter. Claims 21-40, 42, and 44-45, however, recite the abstract idea of “determining characteristics of possible customers in order to optimally target them with insurance services”.
Independent claims 21, 31, and 39 are rejected under 35 U.S.C. 101 based on the following analysis.
-Step 1 (Does the claim fall within a statutory category? YES): Claims 21, 31, and 39 recite an apparatus, a device, and a method of determining characteristics of possible customers in order to better target them with insurance services.
-Step 2A Prong One (Does the claim fall within at least one of the groupings of abstract ideas? YES): The claimed invention:
receive/transmit a first signal comprising image data that identifies a plurality of individuals
perform operations that parse the received image data, extract a plurality of first image data elements from the image data, and associate each of the extracted first image data elements with an identifier assigned to a face of a corresponding one of the plurality of individuals, wherein the extracted first image data element associated with each assigned identifier corresponds to one of the plurality of individuals
input data that includes the extracted first image data elements, generate output data comprising a value of a first characteristic associated with each of the plurality of individuals,
based on the values of the first characteristic associated with at least two individuals of the plurality of individuals, perform operations that generate elements of relationship data characterizing a structure of a familial relationship between the at least two individuals;
generate candidate parameter values for the exchange of data based on the values
and transmit the candidate parameter values …, the candidate parameter values representing discrete elements of a policy associated with the exchange of data, present at least a portion of the candidate parameter values … associated with the policy;
belong to the grouping of mental processes under concepts performed in the human mind (including an observation, evaluation, judgment, opinion), as it recites determining characteristics of possible customers in order to optimally target them with insurance services. Alternatively, the claim belongs to the grouping of certain methods of organizing human activity under fundamental economic practices (including insurance), as it recites “determining characteristics of possible customers in order to optimally target them with insurance services” (refer to MPEP 2106.04(a)(2)). Accordingly, this claim recites an abstract idea.
-Step 2A Prong Two (Are there additional elements in the claim that impose a meaningful limit on the abstract idea? NO).
Claims 21, 31, 39 recite:
a device
apply a trained machine learning process
application of the trained machine learning process
digital interface
Claim 21 recites:
a communications unit;
a memory storing instructions; and
at least one processor coupled to the communications unit and to the memory, the at least one processor being configured to execute the instructions
Claim 39 recites:
a display unit;
a communications unit;
a memory storing instructions;
at least one processor coupled to the display unit, to the communications unit, and to the memory, the at least one processor being configured to execute the instructions.
a computing system.
These additional elements amount to mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to implement the abstract idea (refer to MPEP 2106.05(f)). Accordingly, the claim as a whole does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
-Step 2B (Do the additional elements of the claim provide an inventive concept? NO): As discussed previously with respect to Step 2A Prong Two,
Claims 21, 31, 39 recite:
a device
apply a trained machine learning process
application of the trained machine learning process
digital interface
Claim 21 recites:
a communications unit;
a memory storing instructions; and
at least one processor coupled to the communications unit and to the memory, the at least one processor being configured to execute the instructions
Claim 39 recites:
a display unit;
a communications unit;
a memory storing instructions;
at least one processor coupled to the display unit, to the communications unit, and to the memory, the at least one processor being configured to execute the instructions;
a computing system.
These additional elements amount to mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to implement the abstract idea (refer to MPEP 2106.05(f)). Accordingly, the claim does not provide an inventive concept (significantly more than the abstract idea), and hence the claim is ineligible.
Dependent Claims:
Step 2A Prong One: The following dependent claims recite additional limitations that further define the abstract idea of “determining characteristics of possible customers in order to optimally target them with insurance services”. These claim limitations include:
Claims 22 & 32: wherein
the first characteristic comprises
determine the structure of the familial relationship based on the first characteristic values associated with the at least two of the individuals;
generate the candidate parameter values based on portions of the first characteristic values and the determined structure of the familial relationship
Claim 23:
the additional input data comprising (i) the first characteristic values associated with the at least two of the individuals and (ii) one or more of the first image data elements;
based on the additional input data, determine a value of a second characteristic associated with the familial relationship, the second characteristic value being consistent with the first characteristic values, and the second characteristic value indicating the structure of the familial relationship; and
generate the candidate parameter values based on portions of the first and second characteristic values.
Claims 24 & 33:
recognize a face of each of the individuals within the image data;
determine at least one first spatial position associated with each of the recognized faces within the image data;
Claims 25 & 34:
decompose the image data into the plurality of the first image data elements based on the first spatial positions, each of the first image data elements being associated with a corresponding one of the recognized faces;
determine the first characteristic value for each of the individuals based on an analysis of the first image data elements, the first characteristic value for each of the individuals comprising an age, a gender, a height, or a weight of a corresponding one of the individuals;
Claims 26 & 35:
identify one or more facial features within each of the recognized faces based on the image data;
determine second spatial positions associated with the one or more facial features within each of the recognized faces;
generate input data comprising one or more of the first image data elements and at least one of the first spatial positions or the second spatial positions;
based on the input data, determine the value of the first characteristic for each of the individuals;
Claims 27 & 36:
recognize a physical object within the image data based on … one or more second elements of the image data; and
generate the candidate parameter values based on the first characteristic values associated with the at least two of the individuals, the determined structure of the familial relationship, and an object type associated with the recognized physical object;
Claim 28: determine the object type associated with the recognized physical object based on the one or more second image data elements
Claims 29 & 37:
present at least the portion of the candidate parameter values;
perform operations that capture the image data or receive the image data from a third-party;
transmit the image data;
Claims 30 & 38: wherein:
the image data identifies the plurality of individuals during a first temporal interval;
generate elements of training data associated with a second temporal interval disposed prior to the first temporal interval, the generated elements of training data comprising additional elements of image data identifying the plurality of individuals during the second temporal interval;
obtain elements of outcome data comprising characteristics of each of the plurality of individuals during the second temporal interval;
based on the outcome data, generate a value of a metric characterizing an accuracy of the process, and determine that the metric value exceeds a threshold value; and
based on the determination that the metric value exceeds the threshold value, apply the process to the elements of the image data that identify the plurality of individuals during the first temporal interval.
Claim 40: further receive at least a portion of the image data;
Claim 42: wherein:
apply an additional … process to the values of the first characteristic associated with at least two of the individuals, and generate the relationship data based on the application of the additional process to the values of the first characteristic associated with at least two of the individuals, the relationship data comprising a value of a second characteristic that indicates the structure of the familial relationship between the at least two individuals; and generate candidate parameter values of an exchange of data based on the values of the first characteristic associated with the at least two of the individuals and on the value of the second characteristic that indicates the structure of the familial relationship between the at least two individuals;
Claim 44:
obtain layout data characterizing … elements;
based on the layout data, perform operations that populate at least a subset of the … elements with corresponding ones of the candidate parameter values, and store … data characterizing each of the populated … elements; and
transmit, …, linking data associated with the stored … data, the linking data causing … to perform operations that access the stored … elements … and present the at least a portion of the populated … elements
Step 2A Prong Two (Are there additional elements in the claim that impose a meaningful limit on the abstract idea? NO). The following dependent claims recite mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to implement the abstract idea (refer to MPEP 2106.05(f)). Accordingly, the claims as a whole do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims include:
Claims 22 & 32: wherein the at least one processor is configured to execute the instructions.
Claim 23: wherein the at least one processor is configured to execute the instructions to:
apply an additional trained machine learning process to additional input data,
the application of the additional trained machine learning process;
Claims 24 & 33: wherein the at least one processor is configured to execute the instructions to: an application of a trained facial recognition process to the image data;
Claims 25 & 34: wherein the at least one processor is configured to execute the instructions;
Claims 26 & 35: wherein the at least one processor is configured to execute the instructions to:
an application of a trained facial recognition process
an application of the trained machine learning process.
Claims 27 & 36: wherein the at least one processor is further configured to execute the instructions to:
an application of a trained object recognition process
Claim 28:
wherein the at least one processor is further configured to execute the instructions … the application of the trained object recognition process
Claims 29 & 37: wherein the device is configured to execute an application program, and the executed application program causing the device to:
within the digital interface;
via a digital camera device;
third-party device;
to the apparatus
computing system
Claims 30 & 38: wherein:
at least one processor is further configured to execute the instructions to:
perform operations that train the machine learning process based on an application of the machine learning process to the generated elements of training data;
trained machine learning
Claim 40: further comprising a digital camera coupled to the at least one processor, the at least one processor being further configured to execute the instructions;
Claim 42: wherein the at least one processor is further configured to execute the instructions:
trained machine learning;
Claim 44: wherein the at least one processor is further configured to execute the instructions to:
interface elements
digital interface;
interface data
portion of the memory; and
the device
communications interface;
Claim 45: wherein the device comprises a smart watch, a wearable device, or a wearable form factor.
Step 2B (Do the additional elements of the claim provide an inventive concept? NO): As discussed previously with respect to Step 2A Prong Two, the foregoing dependent claims recite mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to implement the abstract idea (refer to MPEP 2106.05(f)). Accordingly, the claims do not provide an inventive concept (significantly more than the abstract idea) and hence the claims are ineligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
Claims 21-29, 31-37, and 39-40 are rejected under 35 U.S.C. 103 as being unpatentable over Krebs et al. (US 2018/0197041 A1), hereinafter “Krebs”, in view of Calman et al. (US 8611601 B2), hereinafter “Calman”.
Regarding claims 21 and 31, Krebs teaches:
a communications unit; (See at least [0011] via: “….an image file is uploaded to the image analyzer 112 using an image provider 106 (FIG. 1) that is a mobile device…”)
a memory storing instructions; and (See at least [0035] via: “….Referring to FIG. 3, in one example, the image analyzer 112 is an image analyzer 112′. The image analyzer 112′ may include …a non-volatile memory 306 (e.g., hard disk, flash memory) … The non-volatile memory 306 may store computer instructions 312, an operating system 316 and data 318…”)
at least one processor coupled to the communications unit and to the memory, the at least one processor being configured to execute the instructions to: (See at least [0035] via: “….Referring to FIG. 3, in one example, the image analyzer 112 is an image analyzer 112′. The image analyzer 112′ may include a processor 302, a volatile memory 304, a non-volatile memory 306 (e.g., hard disk, flash memory) and the user interface (UI) 308 (e.g., a graphical user interface, a mouse, a keyboard, a display, touch screen and so forth). … In one example, the computer instructions 312 may be executed by the processor 302 out of volatile memory 304 to perform at least a portion of the processes described herein (e.g., process 200)…”)
receive, from a device via the communications unit, a first signal comprising image data that identifies a plurality of individuals associated with an exchange of data; (See at least [0011] via: “….Referring to FIG. 2… Process 200 receives an image (202)…an image file is uploaded to the image analyzer 112 using an image provider 106 (FIG. 1) that is a mobile device …”; in addition see at least [0012] via: “…an image of three people is recognized in the image …”; in addition see at least [0035] via: “…Referring to FIG. 3, in one example, the image analyzer 112 is an image analyzer 112′. The image analyzer 112′ may include a processor 302, a volatile memory 304, a non-volatile memory 306 (e.g., hard disk, flash memory) and the user interface (UI) 308 (e.g., a graphical user interface, a mouse, a keyboard, a display, touch screen and so forth)…”) The Examiner interprets under BRI “exchange of data” to refer to a process of sharing or transferring data between different entities, systems, or organizations. Krebs teaches in [0011] and [0012] the uploading of an image of three people, which relates to an exchange of data.
generate output data comprising a value of a first characteristic associated with each of the plurality of individuals, based on the values of the first characteristic associated with at least two individuals of the plurality of individuals, perform operations that generate elements of relationship data characterizing a structure of a familial relationship between the at least two individuals; generate candidate parameter values for the exchange of data based on the value of the first characteristic associated with at least two of the individuals and on the relationship data; (See at least [0012] via: “… an image of three people is recognized in the image ...”; in addition see at least [0013] via: “…Process 200 identifies one or more objects in the image (212). For example, the image analyzer 112 identifies the one or more objects. The image analyzer 112 determines that an older person is female and most likely the mother while the two younger males are her children. The system 100 identifies that one of the sons is in a wheelchair…”; in addition see at least [0009] via: “…using facial recognition and/or object recognition programs, the image analyzer 112 may identify age and/or gender of one or more persons in the photo…”; in addition see at least [0036] via: “…The processes described herein (e.g., process 200) …may find applicability in any computing or processing environment and with any type of machine or set of machines that can run a computer program. The processes described herein may be implemented in hardware, software, or a combination of the two….”; in addition see at least [0003] and [claim 5] via: “… the image is a photo, the image is a video, … includes at least two people and the service is selected based upon an identified relationship between the at least two people…”)
and transmit the candidate parameter values to the device via the communications unit, the candidate parameter values representing discrete elements of a policy associated with the exchange of data, and the device being configured to present at least a portion of the candidate parameter values within a digital interface associated with the policy. (See at least [0013] via: “… Process 200 identifies one or more objects in the image (212). For example, the image analyzer 112 identifies the one or more objects. The image analyzer 112 determines that an older person is female and most likely the mother while the two younger males are her children. The system 100 identifies that one of the sons is in a wheelchair….”; in addition see at least [0014] via: “…Process 200 selects a service based on the one or more objects (216) and process 200 provides the service (222)…the image analyzer 112 selects from the service(s) 120….the image analyzer 112 provides at least one of home insurance, life insurance, health insurance, financial services, and banking services options based on a family situation…”; See figure 2; in addition see at least [0035] via: “… Referring to FIG. 3, …the image analyzer 112 is an image analyzer 112'. The image analyzer 112' may include a …user interface (UI) 308 (e.g., a graphical user interface, … a display, touch screen …)…”; in addition see at least [0033] via: “…delivery of a service can be provided through a digital user interface, for example, that enables a user to take and upload photos, video or live video stream. 
Information about the identified object(s) can be presented on the screen to provide information about the object and allow the user to take one or more actions...”; in addition see at least [0002] via: “… the one or more objects includes a person and the service is selected based upon an age of the person, the one or more objects includes at least two people and the service is selected based upon an identified relationship between the at least two people, the one or more objects includes a person with a disability and the service is selected based upon the disability, the one or more objects includes medical injury and the service is selected based upon the injury, the one or more objects includes food and the service is selected based upon a type of the food, the one or more objects includes food and the service is selected based upon a type of the food for food allergy detection, the one or more objects includes cooked meat and the service is selected based upon an identified doneness of the meat, the one or more objects includes a structure and the service is selected based damage to the structure, the one or more objects includes a structure and the service is selected based upon maintenance needed for the structure..”) The Examiner interprets the parameter values as relating to the age of the individuals.
However, Krebs is silent regarding the following limitations, which are taught by Calman:
perform operations that parse the received image data, extract a plurality of first image data elements from the image data, and associate each of the extracted first image data elements with an identifier assigned to a face of a corresponding one of the plurality of individuals wherein the extracted first image data element associated with each assigned identifier corresponds to a bounded region of the image data that includes at least the face of the corresponding one of the plurality of individuals (See at least [column 2, lines 8-18] via: “…computer program products are described herein that provide for using real-time video analysis, such as AR or the like, to assist the user of mobile devices with dynamically identifying individuals. Through the use of real-time image object recognition, facial features, facial symmetry, eye color, bone structure, hair color, hair style, body type, unique identifiers, clothing, locations and other features that can be recognized in a real-time video stream can be matched to images of individuals to assist the user with identifying one or more individuals…; in addition see at least [column 10, lines 22-42] via: “…The environment 350 contains a number of objects 320. Some of such objects 320 may include a marker 330 identifiable to the mobile device 200, in some embodiments through an object recognition application that is executed on the mobile device 200 or within the wireless network. A marker 330 may be any type of marker that is a distinguishing feature that can be interpreted by the object recognition application 225 to identify specific objects 320. For instance in identifying an individual, a marker 330 may be facial features, facial symmetry, eye color, bone structure, hair color, hair style, body type, unique identifiers, shapes, ratio of size of one feature to another feature, skin color, height etc. 
In some embodiments, the marker 330 may be audio and the mobile device 200 may be capable of utilizing audio recognition to identify words or the unique qualities of an individual's voice. The marker 330 may be any size, shape, etc. Indeed, in some embodiments, the marker 330 may be very small relative to the object 320 such as a mole or birth mark on an individual's skin, whereas, in other embodiments, the marker 330 may be the entire object 320 such as the unique height and proportion of the individual…”; in addition see at least [column 16, lines 62-67 and column 17, lines 1-22] via: “…Referring now to FIG. 6, which provides a process flow 600 for a system or apparatus for identifying an individual from the image captured in the real-time video stream 510. As shown in block 610, images that are available to the user are collected. … As shown in block 620, the identifiable characteristics from the images captured in the real-time video stream are compared with the images available to the user. Identifiable characteristics may include, facial features, bone structure, body shape, height, hair color, hair style, individually recognizable marks, etc. As represented by block 630, if the image comparison suggests a match, information about the individual is identified. Such information can include the individual's name, e-mail address or any other personally identifying information..”)
apply a trained machine learning process to input data that includes the extracted first image data elements based on the application of the trained machine learning process to the input data that includes the first image data elements (See at least [column 12, lines 9-27] via: “…The object recognition application 225 may use any type of means in order to identify desired objects 320. For instance, the object recognition application 225 may utilize one or more pattern recognition algorithms to analyze objects in the environment 350 and compare with markers 330 in data storage 271 which may be contained within the mobile device 200 (such as within integrated circuit 280) or externally on a separate system accessible via the connected network. For example, the pattern recognition algorithms may include decision trees, logistic regression, Bayes classifiers, support vector machines, kernel estimation, perceptrons, clustering algorithms, regression algorithms, categorical sequence labeling algorithms, real-valued sequence labeling algorithms, parsing algorithms, general algorithms for predicting arbitrarily-structured labels such as Bayesian networks and Markov random fields, ensemble learning algorithms such as bootstrap aggregating, boosting, ensemble averaging, combinations thereof, and the like…”; in addition see at least [column 15, lines 1-22] via: “…In some embodiments, the processor 210 may also be capable of operating one or more applications, such as one or more applications functioning as an artificial intelligence ("AI") engine. The processor 210 may recognize objects that it has identified in prior uses by way of the AI engine. In this way, the processor 210 may recognize specific objects and/or classes of objects, and store information related to the recognized objects in one or more memories and/or databases discussed herein. 
Once the AI engine has thereby "learned" of an object and/or class of objects, the AI engine may run concurrently with and/or collaborate with other modules or applications described herein to perform the various steps of the methods discussed. For example, in some embodiments the AI engine recognizes an object that has been recognized before and stored by the AI engine. The AI engine may then communicate to another application or module of the mobile device, an indication that the object may be the same object previously recognized. In this regard, the AI engine may provide a baseline or starting point from which to determine the nature of the object. In other embodiments, the AI engine's recognition of an object is accepted as the final recognition of the object…”; Examiner notes that bootstrap aggregating, also called bagging, is a machine learning ensemble meta-algorithm designed to improve the stability and accuracy of machine learning algorithms used in statistical classification and regression; boosting is a method used in machine learning to reduce errors in predictive data analysis.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Krebs with Calman. Krebs discloses methods and apparatus to receive and process an image for the provision of services based on objects identified in the image, whereby the objects include at least one person and the services include medical information for an injury to the person and insurance based on the age of the person, and where the objects include at least one structure and the services include repair and/or maintenance for the structure. However, Krebs fails to disclose the use of artificial intelligence or machine learning in recognizing objects or images, as disclosed by Calman. It would have been obvious to combine the processing of an image for the provision of services based on objects identified in the image, as taught by Krebs, with the use of machine learning, as taught by Calman, since "[o]nce the AI engine has thereby 'learned' of an object and/or class of objects, the AI engine may run concurrently with and/or collaborate with other modules or applications described herein to perform the various steps of the methods discussed" (Calman, col. 15, lines 9-13), which would be helpful in identifying and determining with greater precision which specific individuals may need services.
Regarding claims 22 and 32, Krebs and Calman teach the invention as claimed and detailed above with respect to claims 21 and 31, respectively. Krebs also teaches:
wherein the first characteristic comprises at least one of a physical or a demographical parameter of the individuals; (See at least [0012] via: “… an image of three people is recognized in the image ...”; in addition see at least [0009] via: “…using facial recognition and/or object recognition programs, the image analyzer 112 may identify age and/or gender of one or more persons in the photo…”; in addition see at least [0036] via: “…The processes described herein (e.g., process 200) …may find applicability in any computing or processing environment and with any type of machine or set of machines that can run a computer program. The processes described herein may be implemented in hardware, software, or a combination of the two….”) The Examiner interprets the first characteristic to be the age of the individuals.
the at least one processor is configured to execute the instructions to: (See at least [0036] via: “… Referring to FIG. 3, in one example, the image analyzer 112 is an image analyzer 112′. The image analyzer 112′ may include a processor 302, a volatile memory 304, a non-volatile memory 306 (e.g., hard disk, flash memory) and the user interface (UI) 308 (e.g., a graphical user interface, a mouse, a keyboard, a display, touch screen and so forth). The non-volatile memory 306 may store computer instructions 312, an operating system 316 and data 318. In one example, the computer instructions 312 may be executed by the processor 302 out of volatile memory 304 to perform at least a portion of the processes described herein (e.g., process 200)…”; in addition see at least [0039] via: “…The processing blocks (for example, in the process 200) associated with implementing the system may be performed by one or more programmable processors executing one or more computer programs to perform the functions of the system..”)
determine the structure of the familial relationship based on the first characteristic values associated with the at least two of the individuals; and (See at least [0003] and [claim 5] via: “… the image is a photo, the image is a video, … includes at least two people and the service is selected based upon an identified relationship between the at least two people…”; in addition see at least [0013] via: “… Process 200 identifies one or more objects in the image (212). For example, the image analyzer 112 identifies the one or more objects. The image analyzer 112 determines that an older person is female and most likely the mother while the two younger males are her children. The system 100 identifies that one of the sons is in a wheelchair…”)
generate the candidate parameter values based on portions of the first characteristic values and the determined structure of the familial relationship. (See at least [0014] via: “…Process 200 selects a service based on the one or more objects (216) and process 200 provides the service (222)…the image analyzer 112 selects from the service(s) 120….the image analyzer 112 provides at least one of home insurance, life insurance, health insurance, financial services, and banking services options based on a family situation…”; See figure 2; in addition see at least [0002] via: “… the one or more objects includes a person and the service is selected based upon an age of the person, the one or more objects includes at least two people and the service is selected based upon an identified relationship between the at least two people, the one or more objects includes a person with a disability and the service is selected based upon the disability, the one or more objects includes medical injury and the service is selected based upon the injury, the one or more objects includes food and the service is selected based upon a type of the food, the one or more objects includes food and the service is selected based upon a type of the food for food allergy detection, the one or more objects includes cooked meat and the service is selected based upon an identified doneness of the meat, the one or more objects includes a structure and the service is selected based damage to the structure, the one or more objects includes a structure and the service is selected based upon maintenance needed for the structure, the one or more objects includes a structure and the service is selected based upon maintenance needed for the structure…”) The Examiner interprets the first characteristic and parameter values to be related to the age of the individuals.
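For illustration only, the kind of rule-based selection described in the quoted portions of Krebs' process 200 might be sketched as follows. The function name, service list, and age threshold below are hypothetical assumptions for illustration and are not drawn from Krebs' actual implementation:

```python
# Illustrative sketch (hypothetical names and thresholds, not Krebs' code):
# candidate services are generated from characteristic values (here, ages)
# and a determined familial relationship, per the "family situation" example.

def select_services(ages, relationship):
    """Return candidate services for the pictured family situation."""
    services = []
    if relationship == "parent-child":
        services.append("life insurance")      # family situation identified
    if any(age >= 65 for age in ages):
        services.append("health insurance")    # older individual present
    if not services:
        services.append("financial services")  # default offering
    return services

print(select_services([68, 17], "parent-child"))
# ['life insurance', 'health insurance']
```

The service names mirror the options Krebs lists (home, life, and health insurance, financial and banking services); the branching logic is only a schematic stand-in for the selection step.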
Regarding claim 23, Krebs and Calman teach the invention as claimed and detailed above with respect to claim 21. Krebs also teaches:
wherein the at least one processor is configured to execute the instructions to:
[apply an additional trained machine learning process to input data], the additional input data comprising (i) the first characteristic values associated with the at least two of the individuals and (ii) one or more of the first image data elements; (See at least [0002] via: “… method comprises: receiving an image; processing the image to identify one or more objects in the image…”; in addition see at least [0009] via: “…using facial recognition and/or object recognition programs, the image analyzer 112 may identify age and/or gender of one or more persons in the photo…”; in addition see at least [0003] via: “…the one or more objects identified in the image … includes a person with a disability …”; in addition see at least [0013] via: “… The image analyzer 112 determines that an older person is … most likely the mother…The system 100 identifies that one of the sons is in a wheelchair…”; in addition see at least [0036] via: “…The processes described herein (e.g., process 200) …may find applicability in any computing or processing environment and with any type of machine or set of machines that can run a computer program. The processes described herein may be implemented in hardware, software, or a combination of the two….)
[based on the application of the additional trained machine learning process to the input data], determine a value of a second characteristic associated with the familial relationship, the second characteristic value being consistent with the first characteristic values, and the second characteristic value indicating the structure of the familial relationship; and generate the candidate parameter values based on portions of the first and second characteristic values. (See at least [0002] via: “… method comprises: receiving an image; processing the image to identify one or more objects in the image…”; in addition see at least [0009] via: “…using facial recognition and/or object recognition programs, the image analyzer 112 may identify age and/or gender of one or more persons in the photo…”; in addition see at least [0003] via: “…service module coupled to the image analyzer module for selecting a service based on the one or more objects identified in the image … the one or more objects includes a person with a disability and the service is selected based upon the disability…”; in addition see at least [0013] via: “… The image analyzer 112 determines that an older person is … most likely the mother…The system 100 identifies that one of the sons is in a wheelchair…”) The Examiner interprets the two individuals as the mother and son; the first characteristic the age, the second characteristic the gender identifying the son and the mother; and the parameter values as those obtained while selecting the service to be provided related to the age and gender of the individuals.
However, Krebs is silent regarding the following limitations, which are taught by Calman:
based on an application of the additional trained machine learning process to the additional input data (See at least [column 12, lines 9-27] via: “…The object recognition application 225 may use any type of means in order to identify desired objects 320. For instance, the object recognition application 225 may utilize one or more pattern recognition algorithms to analyze objects in the environment 350 and compare with markers 330 in data storage 271 which may be contained within the mobile device 200 (such as within integrated circuit 280) or externally on a separate system accessible via the connected network. For example, the pattern recognition algorithms may include decision trees, logistic regression, Bayes classifiers, support vector machines, kernel estimation, perceptrons, clustering algorithms, regression algorithms, categorical sequence labeling algorithms, real-valued sequence labeling algorithms, parsing algorithms, general algorithms for predicting arbitrarily-structured labels such as Bayesian networks and Markov random fields, ensemble learning algorithms such as bootstrap aggregating, boosting, ensemble averaging, combinations thereof, and the like…”; in addition see at least [column 15, lines 1-22] via: “…In some embodiments, the processor 210 may also be capable of operating one or more applications, such as one or more applications functioning as an artificial intelligence ("AI") engine. The processor 210 may recognize objects that it has identified in prior uses by way of the AI engine. In this way, the processor 210 may recognize specific objects and/or classes of objects, and store information related to the recognized objects in one or more memories and/or databases discussed herein. Once the AI engine has thereby "learned" of an object and/or class of objects, the AI engine may run concurrently with and/or collaborate with other modules or applications described herein to perform the various steps of the methods discussed. 
For example, in some embodiments the AI engine recognizes an object that has been recognized before and stored by the AI engine. The AI engine may then communicate to another application or module of the mobile device, an indication that the object may be the same object previously recognized. In this regard, the AI engine may provide a baseline or starting point from which to determine the nature of the object. In other embodiments, the AI engine's recognition of an object is accepted as the final recognition of the object…”; Examiner notes that bootstrap aggregating, also called bagging, is a machine learning ensemble meta-algorithm designed to improve the stability and accuracy of machine learning algorithms used in statistical classification and regression; boosting is a method used in machine learning to reduce errors in predictive data analysis.)
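As an aside to the Examiner's note, bootstrap aggregating can be illustrated with a minimal, self-contained sketch. The synthetic data, one-feature "stump" learner, and majority-vote rule below are illustrative assumptions only and are not drawn from either cited reference:

```python
import random

# Illustrative sketch of bagging: each base learner is a one-feature
# threshold "stump" fit on a bootstrap resample of the training data,
# and the ensemble predicts by majority vote over the stumps.

def fit_stump(data):
    """Pick the threshold on x that best separates the two labels."""
    best_thr, best_err = 0.0, float("inf")
    for x, _ in data:
        err = sum((xi >= x) != yi for xi, yi in data)
        if err < best_err:
            best_thr, best_err = x, err
    return best_thr

def bagged_predict(stumps, x):
    """Majority vote across all stumps in the ensemble."""
    votes = sum(x >= thr for thr in stumps)
    return votes * 2 >= len(stumps)

random.seed(0)
# Synthetic two-class data: class False near 0, class True near 3.
train = [(random.gauss(0, 1), False) for _ in range(50)] + \
        [(random.gauss(3, 1), True) for _ in range(50)]

# Bagging: fit each stump on an independent bootstrap resample.
stumps = [fit_stump(random.choices(train, k=len(train))) for _ in range(25)]

print(bagged_predict(stumps, -1.0))  # False
print(bagged_predict(stumps, 4.0))   # True
```

Because each stump sees a different bootstrap resample, the majority vote is more stable than any single stump, which is the stability benefit the Examiner's note describes.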
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Krebs with Calman. Krebs discloses methods and apparatus to receive and process an image for the provision of services based on objects identified in the image, whereby the objects include at least one person and the services include medical information for an injury to the person and insurance based on the age of the person, and where the objects include at least one structure and the services include repair and/or maintenance for the structure. However, Krebs fails to disclose the use of artificial intelligence or machine learning in recognizing objects or images, as disclosed by Calman. One of ordinary skill would have been motivated to combine the processing of an image for the provision of services based on objects identified in the image, as taught by Krebs, with the use of machine learning, as taught by Calman, because "Once the AI engine has thereby 'learned' of an object and/or class of objects, the AI engine may run concurrently with and/or collaborate with other modules or applications described herein to perform the various steps of the methods discussed" (Calman, col. 15, lines 9-13), which could be helpful in identifying and determining with greater precision which specific individuals may need services.
Regarding claims 24 and 33, Krebs and Calman teach the invention as claimed and detailed above with respect to claims 21 and 31, respectively. However, Krebs is silent regarding the following limitations, which are taught by Calman:
wherein the at least one processor is configured to execute the instructions to:
recognize a face of each of the individuals within the image data based on an application of a trained facial recognition process to the image data; (See at least [column 15, lines 4-10] via: “…The processor 210 may recognize objects that it has identified in prior uses by way of the AI engine. In this way, the processor 210 may recognize specific objects and/or classes of objects…the AI engine has thereby "learned" of an object and/or class of objects…”; in addition see at least [column 11, lines 31-36] via: “….the mobile device 200 utilizes facial recognition markers 330 to identify the individuals…”)
determine at least one first spatial position associated with the each of the recognized faces within the image data. (See at least [column 10, lines 26-33] via: “…A marker 330 may be any type of marker that is a distinguishing feature that can be interpreted by the object recognition application 225 to identify specific objects 320. For instance in identifying an individual, a marker 330 may be facial features, facial symmetry, eye color, bone structure, hair color, hair style, body type, unique identifiers, shapes, ratio of size of one feature to another feature, skin color, height etc…”) The Examiner interprets the first spatial position to be facial symmetry.
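A minimal sketch of deriving a spatial position for each recognized face follows. The data structure, names, and center-point rule are illustrative assumptions; neither reference discloses this specific implementation:

```python
from typing import NamedTuple

class Face(NamedTuple):
    """A recognized face, located by its bounding box within the image."""
    name: str
    left: int
    top: int
    right: int
    bottom: int

def spatial_position(face):
    """First spatial position: the center of the face's bounding box."""
    return ((face.left + face.right) // 2, (face.top + face.bottom) // 2)

# Hypothetical detections for the family photo discussed in Krebs.
faces = [Face("mother", 40, 30, 120, 130), Face("son", 200, 60, 260, 140)]
positions = {f.name: spatial_position(f) for f in faces}
print(positions)  # {'mother': (80, 80), 'son': (230, 100)}
```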
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Krebs with Calman. Krebs discloses methods and apparatus to receive and process an image for the provision of services based on objects identified in the image, whereby the objects include at least one person and the services include medical information for an injury to the person and insurance based on the age of the person, and where the objects include at least one structure and the services include repair and/or maintenance for the structure. However, Krebs fails to disclose recognizing individuals or identifying their facial features. Calman discloses recognizing individuals and identifying their facial features within an image. Combining the processing of an image for the provision of services based on objects identified in the image, as taught by Krebs, with recognizing individuals and identifying their facial features, as taught by Calman, could be helpful in identifying and determining with greater precision which specific individuals may need services.
Regarding claims 25 and 34, Krebs and Calman teach the invention as claimed and detailed above with respect to claims 21 & 24 and 31 & 33, respectively. Krebs teaches the following limitation of claims 25 and 34:
determine the first characteristic value for each of the individuals based on an analysis of the first image data elements, the first characteristic value for each of the individuals comprising an age, a gender, a height, or a weight of a corresponding one of the individuals. (See at least [0009] via: “…using facial recognition and/or object recognition programs, the image analyzer 112 may identify age and/or gender of one or more persons in the photo….”)
However, Krebs is silent regarding the following limitations of claims 25 and 34, which are taught by Calman:
wherein: the at least one processor is configured to execute the instructions to:
decompose the image data into the plurality of the first image data elements based on the first spatial positions, each of the first image data elements being associated with a corresponding one of the recognized faces; (See at least [column 10, lines 26-33] via: “…A marker 330 may be any type of marker that is a distinguishing feature that can be interpreted by the object recognition application 225 to identify specific objects 320. For instance in identifying an individual, a marker 330 may be facial features, facial symmetry, eye color, bone structure, hair color, hair style, body type, unique identifiers, shapes, ratio of size of one feature to another feature, skin color, height etc…”; in addition see at least [column 15, lines 4-10] via: “…The processor 210 may recognize objects that it has identified in prior uses by way of the AI engine. In this way, the processor 210 may recognize specific objects and/or classes of objects…the AI engine has thereby "learned" of an object and/or class of objects…”; in addition see at least [column 11, lines 31-36] via: “….the mobile device 200 utilizes facial recognition markers 330 to identify the individuals…”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Krebs with Calman. Krebs discloses methods and apparatus to receive and process an image for the provision of services based on objects identified in the image, whereby the objects include at least one person and the services include medical information for an injury to the person and insurance based on the age of the person, and where the objects include at least one structure and the services include repair and/or maintenance for the structure. However, Krebs fails to disclose recognizing individuals or identifying their facial features. Calman discloses recognizing individuals and identifying their facial features within an image. Combining the processing of an image for the provision of services based on objects identified in the image, as taught by Krebs, with recognizing individuals and identifying their facial features, as taught by Calman, could be helpful in identifying and determining with greater precision which specific individuals may need services.
Regarding claims 26 and 35, Krebs and Calman teach the invention as claimed and detailed above with respect to claims 21 & 24 and 31 & 33, respectively. Krebs teaches the following limitation:
[based on an application of the trained machine learning process to the input data], determine the value of the first characteristic for each of the individuals; and (See at least [0009] via: “…The image analyzer 112 analyzes the image received. For example, the image analyzer 112 searches for certain objects in the image. For example, the image analyzer 112 searches for objects that could be persons or things or both. The image analyzer 112 identifies one or more objects. For example, using facial recognition and/or object recognition programs, the image analyzer 112 may identify age and/or gender of one or more persons in the photo. In other examples, the image analyzer 112 identifies the type of objects..….”; in addition see at least [0012] via: “… Process 200 analyzes the image (208). For example, the image analyzer 112 analyzes the image to locate one or more objects in the image. In one example, an image of three people is recognized in the image and an inanimate object…”)
However, Krebs is silent regarding the following limitations of claims 26 and 35, which are taught by Calman:
wherein the at least one processor is configured to execute the instructions to:
based on an application of the trained machine learning process to the input data (See at least [column 12, lines 9-27] via: “…The object recognition application 225 may use any type of means in order to identify desired objects 320. For instance, the object recognition application 225 may utilize one or more pattern recognition algorithms to analyze objects in the environment 350 and compare with markers 330 in data storage 271 which may be contained within the mobile device 200 (such as within integrated circuit 280) or externally on a separate system accessible via the connected network. For example, the pattern recognition algorithms may include decision trees, logistic regression, Bayes classifiers, support vector machines, kernel estimation, perceptrons, clustering algorithms, regression algorithms, categorical sequence labeling algorithms, real-valued sequence labeling algorithms, parsing algorithms, general algorithms for predicting arbitrarily-structured labels such as Bayesian networks and Markov random fields, ensemble learning algorithms such as bootstrap aggregating, boosting, ensemble averaging, combinations thereof, and the like…”; in addition see at least [column 15, lines 1-22] via: “…In some embodiments, the processor 210 may also be capable of operating one or more applications, such as one or more applications functioning as an artificial intelligence ("AI") engine. The processor 210 may recognize objects that it has identified in prior uses by way of the AI engine. In this way, the processor 210 may recognize specific objects and/or classes of objects, and store information related to the recognized objects in one or more memories and/or databases discussed herein. Once the AI engine has thereby "learned" of an object and/or class of objects, the AI engine may run concurrently with and/or collaborate with other modules or applications described herein to perform the various steps of the methods discussed. 
For example, in some embodiments the AI engine recognizes an object that has been recognized before and stored by the AI engine. The AI engine may then communicate to another application or module of the mobile device, an indication that the object may be the same object previously recognized. In this regard, the AI engine may provide a baseline or starting point from which to determine the nature of the object. In other embodiments, the AI engine's recognition of an object is accepted as the final recognition of the object…”; Examiner notes that bootstrap aggregating, also called bagging, is a machine learning ensemble meta-algorithm designed to improve the stability and accuracy of machine learning algorithms used in statistical classification and regression; boosting is a method used in machine learning to reduce errors in predictive data analysis.)
identify one or more facial features within each of the recognized faces based on the application of the trained facial recognition process to the image data; (See at least [column 10, lines 26-33] via: “…A marker 330 may be any type of marker that is a distinguishing feature that can be interpreted by the object recognition application 225 to identify specific objects 320. For instance in identifying an individual, a marker 330 may be facial features, facial symmetry, eye color, bone structure, hair color, hair style, body type, unique identifiers, shapes, ratio of size of one feature to another feature, skin color, height etc…”)
determine second spatial positions associated with the one or more facial features within each of the recognized faces; (See at least [column 11, lines 31-36] via: “….the mobile device 200 utilizes facial recognition markers 330 to identify the individuals…”; in addition see at least [column 10, lines 26-33] via: “…A marker 330 may be any type of marker that is a distinguishing feature that can be interpreted by the object recognition application 225 to identify specific objects 320. For instance in identifying an individual, a marker 330 may be facial features, facial symmetry, eye color, bone structure, hair color, hair style, body type, unique identifiers, shapes, ratio of size of one feature to another feature, skin color, height etc…”) The Examiner interprets the second spatial position to be bone structure.
generate input data comprising one or more of the first image data elements and at least one of the first spatial positions or the second spatial positions; (See at least [column 2, lines 9-18] via: “….provide for using real-time video analysis, such as AR or the like, to assist the user of mobile devices with dynamically identifying individuals. Through the use of real-time image object recognition, facial features, facial symmetry, eye color, bone structure, hair color, hair style, body type, unique identifiers, clothing, locations and other features that can be recognized in a real-time video stream can be matched to images of individuals to assist the user with identifying one or more individuals…”; in addition see at least [column 13, lines 55-67 and column 14, lines 1-16] via: “….the user 310 may identify objects 320 that the object recognition application 225 does not identify and add it to the data storage 271 with desired information in order to be identified and/or displayed in the future. For instance, the user 310 may select an unidentified object 320 and enter the individual's name and/or any other desired information for the unidentified object 320. For instance, if the user 310 encounters an individual at a business meeting that she would like the object recognition application 225 to recall in a future video capture, the user 310 may record a video of the individual and/or capture a still picture of the individual and assign a virtual image 400 to the individual. In such embodiments, the object recognition application 225 may detect/record certain markers 330 (i.e. facial features, bone structure etc.) about the object 320 so that the pattern recognition algorithm(s) (or other identification means) may detect the object 320 in the future. 
Furthermore, in cases where the object information is within the data storage 271, but the object recognition application 225 fails to identify the object 320 (e.g., one or more identifying characteristics or markers 330 of the object has changed since it was added to the data storage 271 or the marker 330 simply was not identified), the user 310 may select the object 320 and associate it with an object 320 already stored in the data storage 271. In such cases, the object recognition application 225 may be capable of updating the markers 330 (e.g. changes to hair style, hair color, new tattoos etc.) for the object 320 in order to identify the object in future real-time video streams…; in addition see at least [column 16, lines 62-67 and column 14, lines 1-22] via: “… Referring now to FIG. 6, which provides a process flow 600 for a system or apparatus for identifying an individual from the image captured in the real-time video stream 510. As shown in block 610, images that are available to the user are collected. Such images may include, but are not limited to publicly available images, such as those available over the Internet, images from social networking sites of which the user 310 is a member, images stored on an accessible memory source and images preserved in hard copies such as printed pictures that are subsequently scanned or converted into digital images and stored on an accessible memory source. In some embodiments, the user may be able to access images maintained and stored by a third-party merchant or vendor. For instance, and without limitation, individuals may make images available to a third-party provider to be accessed by the object recognition application 225 for comparison to images captured in the real-time video stream rather than making the images generally available to each specific user or the general public. 
As shown in block 620, the identifiable characteristics from the images captured in the real-time video stream are compared with the images available to the user. Identifiable characteristics may include, facial features, bone structure, body shape, height, hair color, hair style, individually recognizable marks, etc. As represented by block 630, if the image comparison suggests a match, information about the individual is identified. Such information can include the individual's name, e-mail address or any other personally identifying information…”)
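The comparison step in blocks 620-630 of Calman's process flow 600 can be sketched as a simple marker-matching loop. The marker dictionaries, the stored profiles, and the two-match threshold below are illustrative assumptions for explanation, not Calman's disclosed implementation:

```python
# Hedged sketch: identifiable characteristics ("markers") extracted from a
# video frame are compared against stored profiles; a sufficient number of
# matching markers yields the individual's identifying information.

stored = {
    "J. Smith": {"hair": "brown", "eyes": "blue", "height": "tall"},
    "A. Jones": {"hair": "black", "eyes": "green", "height": "short"},
}

def identify(markers, stored, min_matches=2):
    """Return the first stored identity matching enough markers, else None."""
    for name, profile in stored.items():
        matches = sum(profile.get(k) == v for k, v in markers.items())
        if matches >= min_matches:
            return name
    return None

print(identify({"hair": "black", "eyes": "green"}, stored))  # A. Jones
```

A real system would compare learned feature vectors rather than string-valued markers; the loop only schematizes the compare-then-identify flow the quoted passage describes.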
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Krebs with Calman. Krebs discloses methods and apparatus to receive and process an image for the provision of services based on objects identified in the image, whereby the objects include at least one person and the services include medical information for an injury to the person and insurance based on the age of the person, and where the objects include at least one structure and the services include repair and/or maintenance for the structure. However, Krebs fails to disclose recognizing individuals or identifying their facial features with the use of artificial intelligence, as disclosed by Calman. Calman discloses recognizing individuals and identifying their facial features within an image with the use of artificial intelligence. Combining the processing of an image for the provision of services based on objects identified in the image, as taught by Krebs, with recognizing individuals and identifying their facial features with the use of machine learning, as taught by Calman, could be helpful in identifying and determining with greater precision which specific individuals may need services: "Once the AI engine has thereby 'learned' of an object and/or class of objects, the AI engine may run concurrently with and/or collaborate with other modules or applications described herein to perform the various steps of the methods discussed" (Calman, col. 15, lines 9-13).
Regarding claims 27 and 36, Krebs and Calman teach the invention as claimed and detailed above with respect to claims 21 and 31, respectively. Krebs also teaches:
wherein the at least one processor is further configured to execute the instructions to:
recognize a physical object within the image data [based on an application of a trained object recognition process] to one or more second elements of the image data; and (See at least [0013] via: “…the image analyzer 112 identifies the one or more objects….”; in addition see at least [0002] via: “… the one or more objects includes a person and the service is selected based upon an age of the person, the one or more objects includes at least two people and the service is selected based upon an identified relationship between the at least two people, the one or more objects includes a person with a disability and the service is selected based upon the disability, the one or more objects includes medical injury and the service is selected based upon the injury, the one or more objects includes food and the service is selected based upon a type of the food, the one or more objects includes food and the service is selected based upon a type of the food for food allergy detection, the one or more objects includes cooked meat and the service is selected based upon an identified doneness of the meat, the one or more objects includes a structure and the service is selected based damage to the structure, the one or more objects includes a structure and the service is selected based upon maintenance needed for the structure, the one or more objects includes a structure and the service is selected based upon maintenance needed for the structure…”) The Examiner interprets the first characteristic and parameter values to be related to the age of the individuals.
generate the candidate parameter values based on the first characteristic values associated with the at least two of the individuals, the determined structure of the familial relationship, and an object type associated with the recognized physical object. (See at least [0014] via: “…Process 200 selects a service based on the one or more objects (216) and process 200 provides the service (222). For example, the image analyzer 112 selects from the service(s) 120….”; in addition see at least [0013] via: “… Process 200 identifies one or more objects in the image (212). For example, the image analyzer 112 identifies the one or more objects. The image analyzer 112 determines that an older person is female and most likely the mother while the two younger males are her children. The system 100 identifies that one of the sons is in a wheelchair….”in addition see at least [0002] via: “… the one or more objects includes a person and the service is selected based upon an age of the person, the one or more objects includes at least two people and the service is selected based upon an identified relationship between the at least two people, the one or more objects includes a person with a disability and the service is selected based upon the disability, the one or more objects includes medical injury and the service is selected based upon the injury, the one or more objects includes food and the service is selected based upon a type of the food, the one or more objects includes food and the service is selected based upon a type of the food for food allergy detection, the one or more objects includes cooked meat and the service is selected based upon an identified doneness of the meat, the one or more objects includes a structure and the service is selected based damage to the structure, the one or more objects includes a structure and the service is selected based upon maintenance needed for the structure, the one or more objects includes a structure and the service is selected 
based upon maintenance needed for the structure…”; in addition see at least [0009] via: “…The image analyzer 112 analyzes the image received. For example, the image analyzer 112 searches for certain objects in the image. For example, the image analyzer 112 searches for objects that could be persons or things or both. The image analyzer 112 identifies one or more objects. For example, using facial recognition and/or object recognition programs, the image analyzer 112 may identify age and/or gender of one or more persons in the photo. In other examples, the image analyzer 112 identifies the type of objects…”; in addition see at least [0016] via: “… an image of a meal is uploaded to the image analyzer 112. The image analyzer 112 identifies the type of food and provides from one or more of the services 120 at least one of an estimate of the calories in the meal, ties in medical advice and/or medical costs of consuming the meal. In one embodiment, a user can upload an image for each meal for analysis by the image analyzer 112 in exchange for a potential discount on medical insurance…”) The Examiner interprets the first characteristic and parameter values to be related to the age of the individuals.
However, Krebs is silent regarding the following claim limitations, which are taught by Calman:
based on an application of a trained object recognition process (See at least [column 12, lines 9-27] via: “…The object recognition application 225 may use any type of means in order to identify desired objects 320. For instance, the object recognition application 225 may utilize one or more pattern recognition algorithms to analyze objects in the environment 350 and compare with markers 330 in data storage 271 which may be contained within the mobile device 200 (such as within integrated circuit 280) or externally on a separate system accessible via the connected network. For example, the pattern recognition algorithms may include decision trees, logistic regression, Bayes classifiers, support vector machines, kernel estimation, perceptrons, clustering algorithms, regression algorithms, categorical sequence labeling algorithms, real-valued sequence labeling algorithms, parsing algorithms, general algorithms for predicting arbitrarily-structured labels such as Bayesian networks and Markov random fields, ensemble learning algorithms such as bootstrap aggregating, boosting, ensemble averaging, combinations thereof, and the like…”; in addition see at least [column 15, lines 1-22] via: “…In some embodiments, the processor 210 may also be capable of operating one or more applications, such as one or more applications functioning as an artificial intelligence ("AI") engine. The processor 210 may recognize objects that it has identified in prior uses by way of the AI engine. In this way, the processor 210 may recognize specific objects and/or classes of objects, and store information related to the recognized objects in one or more memories and/or databases discussed herein. Once the AI engine has thereby "learned" of an object and/or class of objects, the AI engine may run concurrently with and/or collaborate with other modules or applications described herein to perform the various steps of the methods discussed. 
For example, in some embodiments the AI engine recognizes an object that has been recognized before and stored by the AI engine. The AI engine may then communicate to another application or module of the mobile device, an indication that the object may be the same object previously recognized. In this regard, the AI engine may provide a baseline or starting point from which to determine the nature of the object. In other embodiments, the AI engine's recognition of an object is accepted as the final recognition of the object…”; Examiner notes that bootstrap aggregating, also called “bagging,” is a machine learning ensemble meta-algorithm designed to improve the stability and accuracy of machine learning algorithms used in statistical classification and regression, and that boosting is a method used in machine learning to reduce errors in predictive data analysis.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Krebs with Calman. Krebs discloses methods and apparatus to receive and process an image for the provision of services based on objects identified in the image, whereby the objects include at least one person and the services include medical information for an injury to the person and insurance based on the age of the person, and where the objects include at least one structure and the services include repair and/or maintenance for the structure. However, Krebs fails to disclose the use of artificial intelligence or machine learning in recognizing objects or images, as disclosed by Calman. It would have been obvious to combine the processing of an image for the provision of services based on objects identified in the image, as taught by Krebs, with the use of machine learning, as taught by Calman, because “Once the AI engine has thereby "learned" of an object and/or class of objects, the AI engine may run concurrently with and/or collaborate with other modules or applications described herein to perform the various steps of the methods discussed” (Calman, col. 15, lines 9-13), which could be helpful in identifying and determining with greater precision which specific individuals may need services.
Regarding claim 28, Krebs and Calman teach the invention as claimed and detailed above with respect to claims 21 and 27. Krebs also teaches:
wherein the at least one processor is further configured to execute the instructions to determine the object type associated with the recognized physical object [based on the application of the trained object recognition process to the one or more second image data elements]. (See at least [0009] via: “…The image analyzer 112 analyzes the image received. For example, the image analyzer 112 searches for certain objects in the image. For example, the image analyzer 112 searches for objects that could be persons or things or both. The image analyzer 112 identifies one or more objects. For example, using facial recognition and/or object recognition programs, the image analyzer 112 may identify age and/or gender of one or more persons in the photo. In other examples, the image analyzer 112 identifies the type of objects…”; in addition see at least [0010] via: “…The one or more services 120 are provided based on the one or more objects identified in the image. In one example, a service may be at least one of a financial service, a banking service, an insurance service or a health service….”; in addition see at least [0014] via: “…Process 200 selects a service based on the one or more objects (216) and process 200 provides the service (222). For example, the image analyzer 112 selects from the service(s) 120. In one particular example, the image analyzer 112 provides at least one of home insurance, life insurance, health insurance, financial services, and banking services options based on a family situation. For example, a first one of the one or more objects is identified as an automobile and a selected service comprises automobile insurance. If the make, model and year of the automobile is identified, the automobile insurance service can include a cost estimate for the insurance service….”; in addition see at least [0016] via: “… an image of a meal is uploaded to the image analyzer 112. 
The image analyzer 112 identifies the type of food and provides from one or more of the services 120 at least one of an estimate of the calories in the meal, ties in medical advice and/or medical costs of consuming the meal. In one embodiment, a user can upload an image for each meal for analysis by the image analyzer 112 in exchange for a potential discount on medical insurance…”)
However, Krebs is silent regarding the following claim limitations, which are taught by Calman:
based on the application of the trained object recognition process to the one or more second image data elements (See at least [column 12, lines 9-27] via: “…The object recognition application 225 may use any type of means in order to identify desired objects 320. For instance, the object recognition application 225 may utilize one or more pattern recognition algorithms to analyze objects in the environment 350 and compare with markers 330 in data storage 271 which may be contained within the mobile device 200 (such as within integrated circuit 280) or externally on a separate system accessible via the connected network. For example, the pattern recognition algorithms may include decision trees, logistic regression, Bayes classifiers, support vector machines, kernel estimation, perceptrons, clustering algorithms, regression algorithms, categorical sequence labeling algorithms, real-valued sequence labeling algorithms, parsing algorithms, general algorithms for predicting arbitrarily-structured labels such as Bayesian networks and Markov random fields, ensemble learning algorithms such as bootstrap aggregating, boosting, ensemble averaging, combinations thereof, and the like…”; in addition see at least [column 15, lines 1-22] via: “…In some embodiments, the processor 210 may also be capable of operating one or more applications, such as one or more applications functioning as an artificial intelligence ("AI") engine. The processor 210 may recognize objects that it has identified in prior uses by way of the AI engine. In this way, the processor 210 may recognize specific objects and/or classes of objects, and store information related to the recognized objects in one or more memories and/or databases discussed herein. Once the AI engine has thereby "learned" of an object and/or class of objects, the AI engine may run concurrently with and/or collaborate with other modules or applications described herein to perform the various steps of the methods discussed. 
For example, in some embodiments the AI engine recognizes an object that has been recognized before and stored by the AI engine. The AI engine may then communicate to another application or module of the mobile device, an indication that the object may be the same object previously recognized. In this regard, the AI engine may provide a baseline or starting point from which to determine the nature of the object. In other embodiments, the AI engine's recognition of an object is accepted as the final recognition of the object…”; Examiner notes that bootstrap aggregating, also called “bagging,” is a machine learning ensemble meta-algorithm designed to improve the stability and accuracy of machine learning algorithms used in statistical classification and regression, and that boosting is a method used in machine learning to reduce errors in predictive data analysis.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Krebs with Calman. Krebs discloses methods and apparatus to receive and process an image for the provision of services based on objects identified in the image, whereby the objects include at least one person and the services include medical information for an injury to the person and insurance based on the age of the person, and where the objects include at least one structure and the services include repair and/or maintenance for the structure. However, Krebs fails to disclose the use of artificial intelligence or machine learning in recognizing objects or images, as disclosed by Calman. It would have been obvious to combine the processing of an image for the provision of services based on objects identified in the image, as taught by Krebs, with the use of machine learning, as taught by Calman, because “Once the AI engine has thereby "learned" of an object and/or class of objects, the AI engine may run concurrently with and/or collaborate with other modules or applications described herein to perform the various steps of the methods discussed” (Calman, col. 15, lines 9-13), which could be helpful in identifying and determining with greater precision which specific individuals may need services.
Regarding claims 29 and 37, Krebs and Calman teach the invention as claimed and detailed above with respect to claims 21 and 31, respectively. Krebs also teaches:
wherein the device is configured to execute an application program, and the executed application program causing the device to:
present at least the portion of the candidate parameter values within the digital interface; (See at least [0009] via: “….The image analyzer 112 analyzes the image received. For example, the image analyzer 112 searches for certain objects in the image. For example, the image analyzer 112 searches for objects that could be persons or things or both. The image analyzer 112 identifies one or more objects. For example, using facial recognition and/or object recognition programs, the image analyzer 112 may identify age and/or gender of one or more persons in the photo. In other examples, the image analyzer 112 identifies the type of objects…”; in addition see at least [0010] via: “….The one or more services 120 are provided based on the one or more objects identified in the image. In one example, a service may be at least one of a financial service, a banking service, an insurance service or a health service…”)
perform operations that capture the image data via a digital camera or receive the image data from a third-party device; (See at least [0008] via: “….Referring to FIG. 1, a system 100 is an example of a system to provide a service using an image. The system 100 includes an image provider 106, an image analyzer 112 and one or more services. The image provider 106 provides an image (e.g., a photo or a video). In some examples, the image provider 106 may be a mobile device that includes a camera for taking photos and videos or a personal computer that includes photos and videos. In other embodiments, the image provider 106 comprises a database of image, videos, etc, for analysis by the image analyzer 112…”; in addition see at least [0011] via: “…. Referring to FIG. 2, an example of a process to provide a service using an image is a process 200. In one example, the image analyzer 112 performs the process 200. Process 200 receives an image (202). For example, an image file is uploaded to the image analyzer 112 using an image provider 106 (FIG. 1) that is a mobile device. In one example, a user is prompted to take a photograph of their family…”)
transmit the image data to the apparatus. (See at least [0015] via: “….In one example, process 200 may be used in medical imaging. For example, an image of an injury (e.g., broken leg, severed finger and so forth) is uploaded to the image analyzer 112. The image analyzer 112 will interpret the image and determine severity of the injury. The image analyzer 112 may send the information to the proper emergency services and/or deliver medical advice on how to address the injury. For example, if the image analyzer 112 interprets the image as including a severed finger, the user can be sent instructions for preserving the severed portion of the finger for re-attachment..”; in addition see at least [0016] via: “….an image of a meal is uploaded to the image analyzer 112. The image analyzer 112 identifies the type of food and provides from one or more of the services 120 at least one of an estimate of the calories in the meal, ties in medical advice and/or medical costs of consuming the meal. In one embodiment, a user can upload an image for each meal for analysis by the image analyzer 112 in exchange for a potential discount on medical insurance…”)
Regarding claim 39, Krebs teaches:
A device, comprising:
a display unit; (See at least [0035] via: “….Referring to FIG. 3, in one example, the image analyzer 112 is an image analyzer 112′. The image analyzer 112′ may include … the user interface (UI) 308 (e.g., a graphical user interface, a mouse, a keyboard, a display, touch screen and so forth)…”)
a communications unit; (See at least [0011] via: “….an image file is uploaded to the image analyzer 112 using an image provider 106 (FIG. 1) that is a mobile device…”)
a memory storing instructions; and (See at least [0035] via: “….Referring to FIG. 3, in one example, the image analyzer 112 is an image analyzer 112′. The image analyzer 112′ may include …a non-volatile memory 306 (e.g., hard disk, flash memory) … The non-volatile memory 306 may store computer instructions 312, an operating system 316 and data 318…”)
at least one processor coupled to the display unit, to the communications unit, and to the memory, the at least one processor being configured to execute the instructions to: (See at least [0035] via: “….Referring to FIG. 3, in one example, the image analyzer 112 is an image analyzer 112′. The image analyzer 112′ may include a processor 302, a volatile memory 304, a non-volatile memory 306 (e.g., hard disk, flash memory) and the user interface (UI) 308 (e.g., a graphical user interface, a mouse, a keyboard, a display, touch screen and so forth). … In one example, the computer instructions 312 may be executed by the processor 302 out of volatile memory 304 to perform at least a portion of the processes described herein (e.g., process 200)…”)
transmit to a computing system, via the communications unit, a signal comprising image data that identifies a plurality of individuals associated with an exchange of data and the computing system being configured to perform operations that generate output data comprising a value of a first characteristic associated with each of the plurality of individuals, based on the values of the first characteristic associated with at least two individuals of the plurality of individuals, perform operations that generate elements of relationship data characterizing a structure of a familial relationship between the at least two individuals and generate candidate parameter values for the exchange of data based on the value of the first characteristic associated with the at least two of the individuals and on the relationship data; (See at least [0011] via: “….Referring to FIG. 2… Process 200 receives an image (202)…an image file is uploaded to the image analyzer 112 using an image provider 106 (FIG. 1) that is a mobile device …”; in addition see at least [0012] via: “…an image of three people is recognized in the image …”; in addition see at least [0009] via: “…using facial recognition and/or object recognition programs, the image analyzer 112 may identify age and/or gender of one or more persons in the photo…”; in addition see at least [0035] via: “…Referring to FIG. 3, in one example, the image analyzer 112 is an image analyzer 112′. 
The image analyzer 112′ may include a processor 302, a volatile memory 304, a non-volatile memory 306 (e.g., hard disk, flash memory) and the user interface (UI) 308 (e.g., a graphical user interface, a mouse, a keyboard, a display, touch screen and so forth)…”; in addition see at least [0036] via: “…The processes described herein (e.g., process 200) …may find applicability in any computing or processing environment and with any type of machine or set of machines that can run a computer program. The processes described herein may be implemented in hardware, software, or a combination of the two….”; in addition see at least [0003] and [claim 5] via: “… the image is a photo, the image is a video, … includes at least two people and the service is selected based upon an identified relationship between the at least two people…”)
receive, via the communications unit, data from the computing system that includes the candidate parameter values of the exchange of data, the candidate parameter values representing discrete elements of a policy associated with the exchange of data; and (See at least [0013] via: “… Process 200 identifies one or more objects in the image (212). For example, the image analyzer 112 identifies the one or more objects. The image analyzer 112 determines that an older person is female and most likely the mother while the two younger males are her children. The system 100 identifies that one of the sons is in a wheelchair….”; in addition see at least [0014] via: “…Process 200 selects a service based on the one or more objects (216) and process 200 provides the service (222)…the image analyzer 112 selects from the service(s) 120….the image analyzer 112 provides at least one of home insurance, life insurance, health insurance, financial services, and banking services options based on a family situation…”; See figure 2; in addition see at least [0035] via: “… Referring to FIG. 3, …the image analyzer 112 is an image analyzer 112'. The image analyzer 112' may include a …user interface (UI) 308 (e.g., a graphical user interface, … a display, touch screen …)…”; in addition see at least [0033] via: “…delivery of a service can be provided through a digital user interface, for example, that enables a user to take and upload photos, video or live video stream. 
Information about the identified object(s) can be presented on the screen to provide information about the object and allow the user to take one or more actions...”; in addition see at least [0002] via: “… the one or more objects includes a person and the service is selected based upon an age of the person, the one or more objects includes at least two people and the service is selected based upon an identified relationship between the at least two people, the one or more objects includes a person with a disability and the service is selected based upon the disability, the one or more objects includes medical injury and the service is selected based upon the injury, the one or more objects includes food and the service is selected based upon a type of the food, the one or more objects includes food and the service is selected based upon a type of the food for food allergy detection, the one or more objects includes cooked meat and the service is selected based upon an identified doneness of the meat, the one or more objects includes a structure and the service is selected based damage to the structure, the one or more objects includes a structure and the service is selected based upon maintenance needed for the structure, the one or more objects includes a structure and the service is selected based upon maintenance needed for the structure..”) The Examiner interprets the parameter value as being related to age.
perform operations that display, using the display unit, the candidate parameter values within a corresponding portion of a digital interface associated with the policy. (See at least [0002] via: “… the one or more objects includes a person and the service is selected based upon an age of the person, the one or more objects includes at least two people and the service is selected based upon an identified relationship between the at least two people, the one or more objects includes a person with a disability and the service is selected based upon the disability, the one or more objects includes medical injury and the service is selected based upon the injury, the one or more objects includes food and the service is selected based upon a type of the food, the one or more objects includes food and the service is selected based upon a type of the food for food allergy detection, the one or more objects includes cooked meat and the service is selected based upon an identified doneness of the meat, the one or more objects includes a structure and the service is selected based damage to the structure, the one or more objects includes a structure and the service is selected based upon maintenance needed for the structure, the one or more objects includes a structure and the service is selected based upon maintenance needed for the structure..”)
However, Krebs is silent regarding the following claim limitations, which are taught by Calman:
parse the image data, extract a plurality of first image data elements from the image data, and associate each of the extracted first image data elements with an identifier assigned to a face of a corresponding one of the plurality of individuals, wherein the extracted first image data element associated with each assigned identifier corresponds to a bounded region of the image data that includes at least the face of the corresponding one of the plurality of individuals; apply a trained machine learning process to input data that includes the extracted first image data elements (See at least [column 2, lines 8-18] via: “…computer program products are described herein that provide for using real-time video analysis, such as AR or the like, to assist the user of mobile devices with dynamically identifying individuals. Through the use of real-time image object recognition, facial features, facial symmetry, eye color, bone structure, hair color, hair style, body type, unique identifiers, clothing, locations and other features that can be recognized in a real-time video stream can be matched to images of individuals to assist the user with identifying one or more individuals…”; in addition see at least [column 10, lines 22-42] via: “…The environment 350 contains a number of objects 320. Some of such objects 320 may include a marker 330 identifiable to the mobile device 200, in some embodiments through an object recognition application that is executed on the mobile device 200 or within the wireless network. A marker 330 may be any type of marker that is a distinguishing feature that can be interpreted by the object recognition application 225 to identify specific objects 320. For instance in identifying an individual, a marker 330 may be facial features, facial symmetry, eye color, bone structure, hair color, hair style, body type, unique identifiers, shapes, ratio of size of one feature to another feature, skin color, height etc. 
In some embodiments, the marker 330 may be audio and the mobile device 200 may be capable of utilizing audio recognition to identify words or the unique qualities of an individual's voice. The marker 330 may be any size, shape, etc. Indeed, in some embodiments, the marker 330 may be very small relative to the object 320 such as a mole or birth mark on an individual's skin, whereas, in other embodiments, the marker 330 may be the entire object 320 such as the unique height and proportion of the individual…”; in addition see at least [column 16, lines 62-67 and column 17, lines 1-22] via: “…Referring now to FIG. 6, which provides a process flow 600 for a system or apparatus for identifying an individual from the image captured in the real-time video stream 510. As shown in block 610, images that are available to the user are collected. … As shown in block 620, the identifiable characteristics from the images captured in the real-time video stream are compared with the images available to the user. Identifiable characteristics may include, facial features, bone structure, body shape, height, hair color, hair style, individually recognizable marks, etc. As represented by block 630, if the image comparison suggests a match, information about the individual is identified. Such information can include the individual's name, e-mail address or any other personally identifying information..”)
based on application of the trained machine learning process to input data that includes the first image data elements (See at least [column 12, lines 9-27] via: “…The object recognition application 225 may use any type of means in order to identify desired objects 320. For instance, the object recognition application 225 may utilize one or more pattern recognition algorithms to analyze objects in the environment 350 and compare with markers 330 in data storage 271 which may be contained within the mobile device 200 (such as within integrated circuit 280) or externally on a separate system accessible via the connected network. For example, the pattern recognition algorithms may include decision trees, logistic regression, Bayes classifiers, support vector machines, kernel estimation, perceptrons, clustering algorithms, regression algorithms, categorical sequence labeling algorithms, real-valued sequence labeling algorithms, parsing algorithms, general algorithms for predicting arbitrarily-structured labels such as Bayesian networks and Markov random fields, ensemble learning algorithms such as bootstrap aggregating, boosting, ensemble averaging, combinations thereof, and the like…”; in addition see at least [column 15, lines 1-22] via: “…In some embodiments, the processor 210 may also be capable of operating one or more applications, such as one or more applications functioning as an artificial intelligence ("AI") engine. The processor 210 may recognize objects that it has identified in prior uses by way of the AI engine. In this way, the processor 210 may recognize specific objects and/or classes of objects, and store information related to the recognized objects in one or more memories and/or databases discussed herein. Once the AI engine has thereby "learned" of an object and/or class of objects, the AI engine may run concurrently with and/or collaborate with other modules or applications described herein to perform the various steps of the methods discussed. 
For example, in some embodiments the AI engine recognizes an object that has been recognized before and stored by the AI engine. The AI engine may then communicate to another application or module of the mobile device, an indication that the object may be the same object previously recognized. In this regard, the AI engine may provide a baseline or starting point from which to determine the nature of the object. In other embodiments, the AI engine's recognition of an object is accepted as the final recognition of the object…”; Examiner notes that bootstrap aggregating, also called “bagging,” is a machine learning ensemble meta-algorithm designed to improve the stability and accuracy of machine learning algorithms used in statistical classification and regression, and that boosting is a method used in machine learning to reduce errors in predictive data analysis.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Krebs with Calman. Krebs discloses methods and apparatus to receive and process an image for the provision of services based on objects identified in the image, whereby the objects include at least one person and the services include medical information for an injury to the person and insurance based on the age of the person, and where the objects include at least one structure and the services include repair and/or maintenance for the structure. However, Krebs fails to disclose the use of artificial intelligence or machine learning in recognizing objects or images, as disclosed by Calman. It would have been obvious to combine the processing of an image for the provision of services based on objects identified in the image, as taught by Krebs, with the use of machine learning, as taught by Calman, because “Once the AI engine has thereby "learned" of an object and/or class of objects, the AI engine may run concurrently with and/or collaborate with other modules or applications described herein to perform the various steps of the methods discussed” (Calman, col. 15, lines 9-13), which could be helpful in identifying and determining with greater precision which specific individuals may need services.
Regarding claim 40, Krebs and Calman teach the invention as claimed and detailed above with respect to claim 39. Krebs also teaches:
further comprising a digital camera coupled to the at least one processor, the at least one processor being further configured to execute the instructions to receive at least a portion of the image data from the digital camera. (See at least [0008] via: “….Referring to FIG. 1, a system 100 is an example of a system to provide a service using an image. The system 100 includes an image provider 106, an image analyzer 112 and one or more services. The image provider 106 provides an image (e.g., a photo or a video). In some examples, the image provider 106 may be a mobile device that includes a camera for taking photos and videos or a personal computer that includes photos and videos. In other embodiments, the image provider 106 comprises a database of image, videos, etc, for analysis by the image analyzer 112…”; in addition see at least [0009] via: “….The image analyzer 112 analyzes the image received. For example, the image analyzer 112 searches for certain objects in the image. For example, the image analyzer 112 searches for objects that could be persons or things or both. The image analyzer 112 identifies one or more objects. For example, using facial recognition and/or object recognition programs, the image analyzer 112 may identify age and/or gender of one or more persons in the photo. In other examples, the image analyzer 112 identifies the type of objects..”; in addition see at least [0011] via: “…. Referring to FIG. 2, an example of a process to provide a service using an image is a process 200. In one example, the image analyzer 112 performs the process 200. Process 200 receives an image (202). For example, an image file is uploaded to the image analyzer 112 using an image provider 106 (FIG. 1) that is a mobile device. In one example, a user is prompted to take a photograph of their family…”; in addition see at least [0035] via: “…. Referring to FIG. 3, in one example, the image analyzer 112 is an image analyzer 112′. 
The image analyzer 112′ may include a processor 302, a volatile memory 304, a non-volatile memory 306 (e.g., hard disk, flash memory) and the user interface (UI) 308 (e.g., a graphical user interface, a mouse, a keyboard, a display, touch screen and so forth). The non-volatile memory 306 may store computer instructions 312, an operating system 316 and data 318. In one example, the computer instructions 312 may be executed by the processor 302 out of volatile memory 304 to perform at least a portion of the processes described herein (e.g., process 200)…”)
Claims 30, 38, 45 are rejected under 35 U.S.C. 103 as being unpatentable over Krebs in view of Calman, in further view of Weston et al. (US 20200349249 A1), hereinafter "Weston".
Regarding claims 30 and 38, Krebs and Calman teach the invention as claimed and detailed above with respect to claims 21 and 31, respectively. Krebs is silent regarding the following limitation, which is taught by Calman:
wherein the image data identifies the plurality of individuals during a first temporal interval; (See at least [column 17, lines 27-31] via: “…The mobile device 200 captures a real-time video stream of a group of individuals. Either contemporaneously with the capture of the real-time video stream or at an earlier time, a plurality of images that are available to the user will be collected…”)
the at least one processor is further configured to execute the instructions to:
generate elements of training data associated with second temporal interval disposed prior to the first temporal interval, the generated elements of training data comprising additional elements of image data identifying the plurality of individuals during the second temporal interval and obtain elements of outcome data comprising characteristics of each of the plurality of individuals during the second temporal interval; (See at least [column 17, lines 31-38] via: “…the user 310 may scan a series of pictures from a high school year book and store these electronic images to a memory device, such as the memory 220, that can be accessed by the object recognition application 225. Alternatively, the object recognition application 225 may collect the images from an online source, for instance a class reunion website, a social networking site, the school's website etc…”)
perform operations that train the machine learning process based on an application of the machine learning process to the generated elements of training data; (See at least [column 15, lines 1-10] via: “…the processor 210 may … be capable of operating one or more applications, such as one or more applications functioning as an artificial intelligence ("AI") engine. The processor 210 may recognize objects that it has identified in prior uses by way of the AI engine. In this way, the processor 210 may recognize specific objects and/or classes of objects…the AI engine has thereby "learned" of an object and/or class of objects…”)
based on the determination that the metric value exceeds the threshold value apply the trained machine learning process to the elements of the image data that identify the plurality of individuals during the first temporal interval. (See at least [column 17, lines 38-44] via: “…These images are then compared to the images captured in the real-time video stream and the object recognition application 225 compares the markers 330 to similar 40 characteristics in the image files. If the object recognition application 225 suggests that an image available to the user is the same as an individual in the video stream, information about the individual will be identified…”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Krebs with Calman. Krebs discloses methods and apparatus to receive and process an image for the provision of services based on objects identified in the image, whereby the objects include at least one person and the services include medical information for an injury to the person and insurance based on the age of the person, and where the objects include at least one structure and the services include repair and/or maintenance for the structure. However, Krebs fails to disclose recognizing individuals or identifying their facial features with the use of artificial intelligence, as disclosed by Calman. Calman discloses recognizing individuals and identifying their facial features within an image with the use of artificial intelligence. Combining the processing of an image for the provision of services based on objects identified in the image as taught by Krebs with recognizing individuals and identifying their facial features with the use of machine learning as taught by Calman could be helpful in identifying with greater precision which services specific individuals may need. "Once the AI engine has thereby "learned" of an object and/or class of objects, the AI engine may run concurrently with and/or collaborate with other modules or applications described herein to perform the various steps of the methods discussed" (Calman, col. 15, lines 9-13).
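For illustration only, the train-then-apply sequence recited by claims 30 and 38 (generate training data from an earlier, second temporal interval, train a machine learning process on it, then apply the trained process to image data from the first temporal interval) can be sketched as follows. All names, the feature vectors, and the nearest-centroid method are hypothetical and are not drawn from Krebs, Calman, or the claims:

```python
# Hypothetical sketch: train on second-interval feature vectors, apply to
# first-interval features. Nearest-centroid stands in for the unspecified
# machine learning process.
from math import dist

def train_centroids(training_elements):
    """Average the feature vectors observed for each individual
    during the second (earlier) temporal interval."""
    grouped = {}
    for person_id, features in training_elements:
        grouped.setdefault(person_id, []).append(features)
    return {pid: tuple(sum(v) / len(vecs) for v in zip(*vecs))
            for pid, vecs in grouped.items()}

def classify(centroids, features):
    """Assign a first-interval feature vector to the nearest
    individual learned from the second interval."""
    return min(centroids, key=lambda pid: dist(centroids[pid], features))

# Second-interval training data: (individual id, feature vector)
training = [("A", (1.0, 1.0)), ("A", (1.2, 0.8)), ("B", (5.0, 5.0))]
model = train_centroids(training)
print(classify(model, (1.1, 0.9)))  # nearest to individual "A"
```
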
However, Krebs and Calman are silent regarding the following limitation, which is taught by Weston:
based on the outcome data, generate a value of a metric characterizing an accuracy of the trained machine learning process, and determine that the metric value exceeds a threshold value (See at least [0026] via: “… the computer executable components can further comprise a confidence evaluation component that determines a level of confidence in the accuracy of the identity based on a degree of correspondence between identifying information determined for the person using the two or more independent identification technologies. The identification component can further indicate the determined confidence level in association with providing identification results and/or determine whether to accept or reject the identification result based on the confidence level being above or below a defined threshold…”; in addition see at least [0106] via: “…the system 200 can also include a verification component 202 that can determine whether the identification module 112 considers an identifier (e.g., the unique name, identification number, etc.) or identifiers determined for a person represented in the input data 101 correct or incorrect. In some implementations in which the confidence evaluation component 208 determines a confidence score for the identification result 110, the verification component 210 can further determine whether to verify the entity based on the confidence score. For example, the verification component 210 can apply a thresholding technique wherein the verification component 210 verifies the entity based on whether the confidence score is greater than a defined threshold. The verification component 210 can further include identity verification information in the identification result that indicates whether the system verifies the identity or not..”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Krebs and Calman with Weston. Krebs discloses methods and apparatus to receive and process an image of objects, whereby the objects include at least one person and at least one structure. However, Krebs fails to disclose the use of artificial intelligence or machine learning in determining a level of confidence above a threshold in the accuracy of identifying an individual, as taught by Weston. Combining the processing of an image of individuals as taught by Krebs with the use of machine learning to determine a level of confidence above a threshold in the accuracy of identifying an individual could be helpful in providing a degree of certainty in determining familial relationships between individuals.
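For illustration only, the accept-or-reject thresholding cited from Weston [0026] and [0106] (verify an identification result when its confidence score exceeds a defined threshold) can be sketched as follows. The function and field names are hypothetical and are not Weston's implementation:

```python
# Hypothetical sketch of confidence-based identity verification:
# accept the identification result only when confidence > threshold.
def verify_identity(identifier, confidence, threshold=0.8):
    """Return the identification result with a verification flag."""
    return {"id": identifier,
            "confidence": confidence,
            "verified": confidence > threshold}

result = verify_identity("individual-1", 0.91)
print(result["verified"])  # True: 0.91 exceeds the 0.8 threshold
```
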
Regarding claim 45, Krebs and Calman teach the invention as claimed and detailed above with respect to claim 21. Krebs and Calman are silent regarding the following limitation, which is taught by Weston:
wherein the device comprises a smart watch, a wearable device, or a wearable form factor. (See at least [0078] via: "…the input data 101 comprises image data captured of a person and/or an environment, the reception component 202 can receive the image data 102 from one or more cameras located in proximity to the person and/or the environment. For example, the reception component 202 can receive or extract the image data from … one or more cameras of a device associated with the person, such as smartphone, a wearable device, or the like)…")
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Krebs and Calman with Weston. Krebs discloses methods and apparatus to receive and process an image of objects, whereby the objects include at least one person and at least one structure. However, Krebs fails to disclose the use of a wearable device to capture an image as taught by Weston. Combining the processing of an image of individuals as taught by Krebs with the use of a wearable device to capture images provides for capturing a given image instantaneously, with greater facility and ease, versus the use of a non-wearable device that must first be found and programmed.
Claim 42 is rejected under 35 U.S.C. 103 as being unpatentable over Krebs in view of Calman, in further view of Zhang et al. (US 20110142300 A1), hereinafter "Zhang".
Regarding claim 42, Krebs and Calman teach the invention as claimed and detailed above with respect to claim 21. Krebs and Calman are silent regarding the following limitation, which is taught by Zhang:
wherein the at least one processor is further configured to execute the instructions to:
apply an additional, trained machine learning process to the values of the first characteristic associated with at least two of the individuals, and generate the relationship data based on the application of the additional, trained machine learning process to the values of the first characteristic associated with at least two of the individuals, the relationship data comprising a value of a second characteristic that indicates the structure of the familial relationship between the at least two individuals; and generate candidate parameter values of an exchange of data based on the values of the first characteristic associated with the at least two of the individuals and on the value of the second characteristic that indicates the structure of the familial relationship between the at least two individuals (See at least [0079] via: “…the relation application 310 includes a learning mechanism or uses a learning machine to identify … an individual. The learning mechanism and/or the learning machine can be trained with faces of individuals…”; in addition see at least [0014] via: “…The relation application 110 is an application which can be utilized in conjunction with the processor 120 to create and/or organize a relation tree. For the purposes of this application, a relation tree links members of a nuclear family to one another and links individuals associated with members of the nuclear family to the nuclear family…”; in addition see at least [0015] via: “…. An individual from the digital images 130 is identified by the relation application 110 to be a family member when the individual has facial features similar to at least one other nuclear family member. 
Further, the nuclear family includes parents of the nuclear family and any children of the parents..”; in addition see at least [0036] via: “….If the relation application 110 determines that an individual has a similar face structure, face feature, and/or face pattern with at least one other individual, then the relation application 110 will determine that the individual and at least one other individual have similar facial features..”; in addition see at least [0037] via: “….Utilizing the results from the facial similarity analysis and the facial recognition analysis, the relation application 110 can proceed to identify individuals with corresponding major clusters, … who have facial similarities similar to other individuals with corresponding major clusters as members of the nuclear family…, the relation application can utilize additional analysis and/or considerations when identifying members of the nuclear family..”; in addition see at least [0038] via: “…. Once the relation application 110 has identified members of the nuclear family, the relation application 110 can proceed to identify parents of the nuclear family and any children of the parents. When identifying the parents of the nuclear family, the relation application 110 utilizes a demographic analysis to identify an age and/or gender of members of the nuclear family…”; As seen in figure 3, upon preforming facial similarity analysis, the relation application 310 detects facial features that are similar between individual 1 and individuals 3, 4, 5, 6 and between individual 2 and individuals 3, 4, 5. In addition see at least [0081] via: “..As illustrated in FIG. 3, in response to the demographic analysis and/or the facial similarity analysis performed on Individual 1's face, the relation application 310 has identified that Individual 1 is a male of the age 40 and Individual 1 has facial features which are similar to Individuals 3, 4, 5, and 6. 
Additionally, the relation application 310 has identified that Individual 2 is a female of the age of 38 and Individual 2 has facial similarities with individuals 3, 4, and 5..“; In addition see at least [0082] via: “…Further, the relation application 310 has identified that Individual 3 is a male of the age of 16 and has facial features similar with Individuals 1, 2, 4, and 5, Individual 4 is a female of the age 14 and has facial features similar to Individuals 1, 2, 3, and 5, and Individual 5 is a male of the age of 10 and has facial features similar to Individuals 1, 2, 3, and 4...”; In addition see at least [0084] via: “…FIG. 4 illustrates individuals being identified as members of a nuclear family, extended family members, acquaintances of the nuclear family, and/or strangers. As noted above and as illustrated in FIG. 4, the individuals have corresponding clusters and details of the individuals have been identified in response to at least one facial analysis performed by a relation application 410..”; In addition see at least [0090] via: “…Further, utilizing results from the facial recognition analysis, the relation application 410 determines that Individuals 1 and 2 frequently appear next to one another in the digital images previously displayed in FIG. 2A. As a result, the relation application identifies Individuals 1 and 2 to be the parents of the nuclear family..”; In addition see at least [0098] via: “…As noted above and as illustrated in FIG. 5, Individuals 1, 2, 3, 4, and 5 have been identified by the relation application as members of the nuclear family. Additionally, Individuals 1 and 2 have been identified as the parents of the nuclear family and Individuals, 3, 4, and 5 have been identified as children of the nuclear family…”; In addition see at least [0099] via: “…As illustrated in FIG. 5, in one embodiment, the relation tree 500 is organized such that members of the nuclear family are at a central position of the relation tree 500. 
Additionally, the parents, Individuals 1 and 2, are positioned at the top of the nuclear family and are linked to one another. Additionally, the children, Individuals 3, 4, and 5, are positioned below the parents and are linked to the parents and to one another..”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Krebs and Calman with Zhang. Krebs discloses methods and apparatus to receive and process an image of objects, whereby the objects include at least one person and at least one structure. However, Krebs fails to disclose the use of artificial intelligence or machine learning in recognizing at least two characteristics of at least two individuals to indicate a familial relationship between the individuals, as taught by Zhang. Combining the processing of an image of individuals as taught by Krebs with the use of machine learning to identify familial relationships between individuals based on at least two characteristics could be helpful in determining familial relationships between individuals with greater precision.
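For illustration only, the facial-similarity linking described in Zhang [0036]-[0037] (individuals with sufficiently similar facial features are grouped as nuclear family members) can be sketched as follows. The similarity scores, cutoff value, and names are hypothetical and do not reproduce Zhang's relation application:

```python
# Hypothetical sketch of Zhang-style grouping: individuals whose pairwise
# facial-similarity score exceeds a cutoff are linked, and the linked
# individuals form the candidate nuclear family.
def nuclear_family(similarities, cutoff=0.7):
    """similarities: {(person_a, person_b): score}. Returns the set of
    individuals linked to at least one other by a score above cutoff."""
    family = set()
    for (a, b), score in similarities.items():
        if score > cutoff:
            family.update((a, b))
    return family

# Individuals 1 and 2 each resemble individual 3; individual 6 does not.
scores = {("1", "3"): 0.9, ("2", "3"): 0.85, ("1", "6"): 0.4}
print(sorted(nuclear_family(scores)))  # ['1', '2', '3']
```
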
Claim 44 is rejected under 35 U.S.C. 103 as being unpatentable over Krebs in view of Calman, in further view of Burford et al. (US 2017/0256173 A1), hereinafter "Burford".
Regarding claim 44, Krebs and Calman teach the invention as claimed and detailed above with respect to claim 21. Krebs and Calman are silent regarding the following limitation, which is taught by Burford:
obtain layout data characterizing interface elements of the digital interface; based on the layout data, perform operations that populate at least a subset of the interface elements with corresponding ones of the candidate parameter values, and store interface data characterizing each of the populated interface elements within a portion of the memory; and transmit, to the device via the communications interface, linking data associated with the stored interface data, the linking data causing the device to perform operations that access the stored interface elements from the portion the memory and present the at least a portion of the populated interface elements within the digital interface (See at least [0004] via: “..The computing system may populate a queryable graphical user interface (GUI) with one or more candidate query parameters, from which a user may define a query for portions of the stored data relevant to a particular intervention and its impact on a particular group of individuals. The computing system may provide data indicative of the populated GUI to a client device, which may present the populated GUI, and the candidate query elements, to a user through a corresponding display device. 
The user may, in some instances, provide input to the client device that identifies one or more of the candidate query parameters (e.g., a particular intervention, impacted individuals, demographic and/or geographic parameters, performance metrics, etc.), and the client device may transmit the identified candidate query parameters to the computing system..”; in addition see at least [0059] via: “..GUI data 208 may, in some instances, also include layout data indicative of a layout of the queryable GUI (e.g., when rendered by client device 104), a position of the interface elements within the queryable GUI, and/or an assignment of the queryable objective data elements, queryable interventions, impacted individuals or groups, and/or objective performance metrics to corresponding interface elements (e.g., drop-down menus, check boxes, etc.). In certain aspects, described in FIG. 2A, computing system 102 may transmit GUI data 208 across network 122 to client device 104 using any of the secure communication protocols outlined above (e.g., through an appropriate API maintained by an application executed by client device 104). Client device 104 may, in some aspects, receive GUI data 208, may render GUI data 208 for presentation, and may present a populated, queryable GUI (e.g., populated GUI 210) to the user through a corresponding display unit (e.g., a touch-screen display, etc.)…”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Krebs and Calman with Burford. Krebs discloses methods and apparatus to receive and process an image of objects, whereby the objects include at least one person and at least one structure. However, Krebs fails to disclose populating a queryable interface with query parameters provided for presentation to a user, as taught by Burford. Combining Krebs and Burford could be helpful in efficiently downloading required images to a user.
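For illustration only, the interface population described in Burford [0059] (layout data assigns candidate parameter values to interface elements, which are then stored and presented) can be sketched as follows. The element identifiers, parameter names, and function are hypothetical and are not Burford's GUI data structures:

```python
# Hypothetical sketch: layout data maps interface elements to parameter
# names; populate each element with its corresponding candidate value.
def populate_interface(layout, candidates):
    """layout: {element_id: parameter_name}; candidates: {parameter_name: value}.
    Populates only the elements whose parameter has a candidate value."""
    return {element: candidates[param]
            for element, param in layout.items() if param in candidates}

layout = {"dropdown_term": "policy_term", "checkbox_smoker": "smoker"}
candidates = {"policy_term": "20yr", "smoker": False}
populated = populate_interface(layout, candidates)
print(populated["dropdown_term"])  # 20yr
```
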
Prior Art Made of Record
The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure, and is listed in the attached form PTO-892 (Notice of References Cited). Unless expressly noted otherwise by the Examiner, all documents listed on form PTO-892 are cited in their entirety.
Gokturk (WO 2006122164 A2) – SYSTEM AND METHOD FOR ENABLING THE USE OF CAPTURED IMAGES THROUGH RECOGNITION – teaches: enabling retrieval of a collection of captured images that form at least a portion of a library of images. For each image in the collection, a captured image may be analyzed to recognize information from image data contained in the captured image, and an index may be generated, where the index data is based on the recognized information. Using the index, functionality such as search and retrieval is enabled. Various recognition techniques, including those that use the face, clothing, apparel, and combinations of characteristics, may be utilized. Recognition may be performed on, among other things, persons and text carried on objects.
Response to Arguments
Applicant's arguments filed 10/22/2025 have been fully considered but they are not persuasive.
Applicant amended independent claims 21, 31, and 39, as posted in the above analysis with additions underlined.
In response to applicant's arguments regarding claim rejection under 35 U.S.C § 101:
Several steps are taken in the analysis as to whether an invention is rejected under § 101. The first step is to determine whether the claim falls within a statutory category. In this case it does for claims 21, 31, and 39, since the claims recite an apparatus and method of determining characteristics of possible customers in order to better target them with insurance services. The second step, Step 2A Prong One, is to determine whether the claims recite an abstract idea. This is followed by Step 2A Prong Two, which determines whether additional elements in the claim impose a meaningful limit on the abstract idea so as to integrate it into a practical application, and finally by Step 2B, which determines whether additional elements of the claim provide an inventive concept.
2A Prong ONE
The Applicant argues that the claims do not recite an abstract idea. Specifically, the Applicant argues that none of the independent claims explicitly recites the alleged abstract idea of "determining characteristics of possible customers in order to optimally targeting them with insurance services," and that neither does the specification provide support. Furthermore, the Applicant argues that the Office Action fails to provide any support, beyond conclusory statements, for its assertion that these quoted elements allegedly recited by Applicant's independent claims correspond to one of the methods of organizing human activity or mental processes deemed patent-ineligible by the 2019 Guidance.
The Examiner disagrees, as the Applicant's arguments are not persuasive. The Examiner explains the method used to select the abstract idea, which is to strip the additional elements from the claims. As seen below, the bolded limitations constitute the abstract idea after stripping the un-bolded additional elements from the amended limitations of claims 21, 31, and 39:
receive, from a device via the communications unit, a first signal comprising image data that identifies a plurality of individuals associated with an exchange of data;
perform operations that parse the received image data extract a plurality of first image data elements from the image data, and associate each of the extracted first image data elements with an identifier assigned to a face of a corresponding one of the plurality of individuals wherein the extracted first image data element associated with each assigned identifier corresponds to a bounded region of the image data that includes at least the face of the corresponding one of the plurality of individuals;
apply a trained machine learning process to input data that includes the extracted first image data elements, and
based on the application of the trained machine learning process to the input data that includes the first image data elements generate output data comprising a value of a first characteristic associated with each
based on the values of the first characteristic associated with at least two individuals of the plurality of individuals, perform operations that generate elements of relationship data characterizing a structure of a familial relationship between the at least two individuals;
generate candidate parameter values for the exchange of data based on the values
and transmit the candidate parameter values to the device via the communications unit, the candidate parameter values representing discrete elements of a policy associated with the exchange of data, and the device being configured to present at least a portion of the candidate parameter values within a digital interface associated with the policy.
The selected abstract idea (the bolded limitations) of claims 21, 31, and 39 can be performed by a human with pencil and paper. Hence it belongs to the grouping of mental processes, under concepts performed in the human mind (including an observation, evaluation, judgment, or opinion), as it recites determining characteristics of possible customers in order to optimally target them with insurance services. Alternatively, it belongs to the grouping of certain methods of organizing human activity, under fundamental economic practices (including insurance), for the same reason (refer to MPEP 2106.04(a)(2)). Accordingly, independent claims 21, 31, and 39 recite an abstract idea. The claim limitations make reference to a policy associated with the data exchange, which is interpreted by the Examiner as relating to an insurance policy. This is further supported by paragraph [0028] of the specification, which cites financial institutions and insurance policies: "In some instances, provisioning system 130 may be associated with, or operated by, a financial institution, and insurance company, or other business or organizational entity that underwrites or issues one or more insurance policies to, or on behalf of, corresponding customers or beneficiaries, such as user 101 and one or more family members of user 101. Examples of these insurance policies include, but are not limited to, a term life insurance policy, a whole life insurance policy, a health insurance policy (e.g., a medical, dental, and/or vision insurance policy), a homeowner's insurance policy, a vehicle insurance policy, and any additional, or alternate, insurance policy issued to user 101 and listing user 101 or the one or more family members as beneficiaries. Further, and as described herein, provisioning system 130 may also be configured to provision one or more executable application programs to one or more network-connected devices operating within environment 100, such as executable insurance application 106 maintained by client device 102."
2A Prong TWO
Applicant argues that even if Applicant's amended independent claims 21, 31, and 39 could recite an abstract idea, the claims nevertheless integrate the allegedly recited abstract idea into a patent-eligible, practical application. The Applicant further supports this argument by stating that the claims provide a specific, technological improvement to an existing technology or technical field and, as such, integrate any allegedly recited abstract idea into a patent-eligible, practical application. The technological improvements cited by Applicant include the ability to dynamically provision exchanges of data based on machine learning processes that detect familial relationships of individuals within processed image data, in real time and contemporaneously with the received image data, which the Applicant asserts represents a specific, technological improvement to existing provisioning processes.
The Examiner disagrees, as the Applicant's arguments are not persuasive. The Examiner has interpreted the following as additional elements:
Claims 21, 31, 39 recite:
a device
apply a trained machine learning process
application of the trained machine learning process
digital interface
Claim 21 recites:
a communications unit;
a memory storing instructions; and
at least one processor coupled to the communications unit and to the memory, the at least one processor being configured to execute the instructions
Claim 39 recites:
a display unit;
a communications unit;
a memory storing instructions;
at least one processor coupled to the display unit, to the communications unit, and to the memory, the at least one processor being configured to execute the instructions;
Computing system.
These additional elements amount to mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to implement the abstract idea (refer to MPEP 2106.05(f)). Accordingly, the claim as a whole does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
Further support can be found in paragraph [0047] of the specification, where, under the heading of "Exemplary Computer-Implemented Processes for Dynamically Provisioning Exchanges of Data Based on Processed Image Data," the listed generic tools include a network-connected computing system and a network-connected device, such as a client device. Paragraph [0025] provides examples of the client device, including: "a personal computer, a laptop computer, a tablet computer, a notebook computer, a hand-held computer, a personal digital assistant, a portable navigation device, a mobile phone, a smartphone, a wearable computing device (e.g., a smart watch, a wearable activity monitor, wearable smart jewelry, and glasses and other optical devices that include optical head-mounted displays (OHMDs)), an embedded computing device (e.g., in communication with a smart textile or electronic fabric), and any other type of computing device that may be configured to store data and software instructions, execute software instructions to perform operations, and/or display information on an interface module." All of these are interpreted to be generic devices by the Examiner.
In order to integrate the abstract idea into a practical application, the Applicant could demonstrate that at least one of the conditions enumerated below applies:
Improvements to the functioning of a computer, or to any other technology or technical field - see MPEP 2106.05(a)
Applying the judicial exception with, or by use of, a particular machine - see MPEP 2106.05(b)
Effecting a transformation or reduction of a particular article to a different state or thing - see MPEP 2106.05(c)
Applying or using the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception - see MPEP 2106.05(e) and Vanda Memo
The Applicant has not demonstrated any of the above listed conditions.
Step 2B
The Applicant offers arguments regarding Step 2B similar to those provided under Step 2A Prong Two. The Examiner disagrees, as the Applicant's argument is not persuasive. The recited additional elements amount to mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to implement the abstract idea (refer to MPEP 2106.05(f)). Accordingly, the claim does not provide an inventive concept (significantly more than the abstract idea), and hence the claim is ineligible.
In order to evaluate whether the claim recites additional elements that amount to an inventive concept, the following could be shown:
Adding a specific limitation other than what is well-understood, routine, and conventional (WURC) activity in the field - see MPEP 2106.05(d)
The Applicant has not demonstrated the above-listed condition.
In response to Applicant's arguments regarding the claim rejections under 35 U.S.C. § 103:
The Applicant argues that Krebs in view of Calman fail to teach the following limitations of claims 21, 31, & 39:
receive, from a device via the communications unit, a first signal comprising image data that identifies a plurality of individuals associated with an exchange of data
perform operations that parse the received image data, extract a plurality of first image data elements from the image data, and associate each of the extracted first image data elements with an identifier assigned to a face of a corresponding one of the plurality of individuals, wherein the extracted first image data element associated with each assigned identifier corresponds to a bounded region of the image data that includes at least the face of the corresponding one of the plurality of individuals
generate candidate parameter values for exchange of data associated with the individuals within the received image data based on the values
candidate parameter values representing discrete elements of a policy associated with the exchange of data
input data that includes the extracted first image data elements, … generate output data comprising a value of a first characteristic associated with each of the plurality of individuals.
Regarding the first limitation:
The Applicant argues that Krebs in combination with Calman fails to teach “the plurality of individuals being associated with an exchange of data" as recited similarly by amended independent claims 21, 31, and 39.
The Examiner disagrees with Applicant’s arguments since they are not persuasive. Under the broadest reasonable interpretation (BRI), the Examiner interprets “exchange of data” to refer to a process of sharing or transferring data between different entities, systems, or organizations. Krebs teaches in [0011] and [0012] the uploading of an image of three people, which is related to an exchange of data.
Regarding the second limitation:
The Applicant argues that Calman at most discloses a “computer program” that performs “real-time image object recognition” on a “real-time video stream” to detect “facial features, facial symmetry, eye color, bone structure, hair color, hair style, body type, unique identifiers, clothing, [and] locations.” However, nowhere does Calman teach or suggest processes that extract image data elements from the “real-time video stream” that include a bounded region that includes a face of a corresponding individual. Rather than disclosing a bounded region of a face of an individual, the cited portions of Calman at most refer to identifiable characteristics that include “facial features, facial symmetry, eye color, bone structure, hair color, hair style, body type, unique identifiers, clothing, locations,” and “body shape, height, [and] individually recognizable marks”.
The Examiner disagrees with Applicant’s arguments since they are not persuasive. Calman in [column 16, lines 62-67 and column 17, lines 1-22] teaches: “…Referring now to FIG. 6, which provides a process flow 600 for a system or apparatus for identifying an individual from the image captured in the real-time video stream 510. As shown in block 610, images that are available to the user are collected. … As shown in block 620, the identifiable characteristics from the images captured in the real-time video stream are compared with the images available to the user. Identifiable characteristics may include, facial features, bone structure, body shape, height, hair color, hair style, individually recognizable marks, etc. As represented by block 630, if the image comparison suggests a match, information about the individual is identified. Such information can include the individual’s name, e-mail address or any other personally identifying information..”. Calman refers to block 610, where it is shown that images available to the user are collected. The Examiner interprets this as synonymous with extracting image data. Furthermore, Calman refers to Fig. 6, which provides a process to identify an individual from the captured image, with identifiable characteristics that may include facial features, bone structure, body shape, height, hair color, hair style, individually recognizable marks, etc. The Examiner interprets the identification of the characteristics of an individual, including facial features, to be bounded by the face of the individual.
Regarding the third and fourth limitations:
The Applicant further argues that, even assuming that the cited portions of Krebs could disclose the claimed “values of the first characteristic” or the claimed “relationship data,” nowhere does Krebs teach the third and fourth limitations.
The Examiner disagrees with Applicant’s arguments since they are not persuasive. In [0009], Krebs recites: “using facial recognition and/or object recognition programs, the image analyzer 112 may identify age and/or gender of one or more persons in the photo…”. The Examiner interprets this as a parameter value related to age. In [0002], Krebs recites: “a person and the service is selected based upon an age of the person”, and in [0014] Krebs recites: “Process 200 selects a service based on the one or more objects (216) and process 200 provides the service (222)…the image analyzer 112 selects from the service(s) 120….the image analyzer 112 provides … health insurance options”. The Examiner interprets that the service provided is health insurance.
Regarding the fifth limitation:
The Applicant argues that Krebs and Calman fail to teach "input data that includes the first image data elements extracted from the image data," as claimed, or to generate "output data comprising a value of a first characteristic associated with each of the individuals."
The Examiner disagrees with Applicant’s arguments since they are not persuasive. The limitation is taught by the combination of Krebs with Calman. Krebs discloses methods and apparatus to receive and process an image for the provision of services based on objects identified in the image, whereby the objects include at least one person and the services include medical information for an injury to the person and insurance based on the age of the person, and where the objects include at least one structure and the services include repair and/or maintenance for the structure. However, Krebs fails to disclose the use of artificial intelligence or machine learning in recognizing objects or images, as disclosed by Calman. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have combined the processing of an image for the provision of services based on objects identified in the image, as taught by Krebs, with the use of machine learning, as taught by Calman, since “Once the AI engine has thereby "learned" of an object and/or class of objects, the AI engine may run concurrently with and/or collaborate with other modules or applications described herein to perform the various steps of the methods discussed” (Calman, col. 15, lines 9-13), which could be helpful in identifying and determining with greater precision which specific individuals may need services.
For reasons of record and as set forth above, the Examiner maintains the rejection of claims 21-40, 42, and 44-45 as being directed to a judicial exception without significantly more, and thereby being directed to non-statutory subject matter under 35 U.S.C. § 101. In addition, claims 21-40, 42, and 44-45 are rejected under 35 U.S.C. § 103 as being unpatentable over the prior art. In reaching this decision, the Examiner considered all evidence presented and all arguments actually made by the Applicant.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PIERRE L MACCAGNO whose telephone number is (571)270-5408. The examiner can normally be reached M-F 8:00 to 5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mamon Obeid can be reached at (571)270-1813. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PIERRE L MACCAGNO/Examiner, Art Unit 3687
/STEVEN G.S. SANGHERA/Primary Examiner, Art Unit 3684