DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
The Amendment filed on November 24, 2025 has been entered. The examiner acknowledges the amendments to claims 1 and 21-22.
Rejections under 35 U.S.C. § 101: Applicant’s amendments to independent claim 1 recite ‘…causing a modification of a stock of products, causing an increase in a stock of [ ] products at the physical location, and causing a decrease in a stock [ ] of products at the physical premises…’ [Claim 1], but support for this language in the specification, including the mechanisms or structure that deliver the claimed results, is not apparent. The amendments to claims 21-22 recite performing the method of claim 1 and are similarly impacted; thus the existence of a practical application cannot be determined from the claims.
Applicant argues that support for modifying, increasing, or decreasing stock levels is provided in [0016], but the mechanism(s) and additional elements that perform the operations achieving the stock changes in response to the predictions are not identified. Presenting the prediction to a user by causing a message or graphical user interface to be displayed on a screen is a known and conventional means of communication, but it falls short of a practical application of the knowledge derived from the invention. There is no indication of how or why that knowledge would cause any change in product inventory (stock of products) to occur.
Additional arguments cite how a merchant can maintain stock in a more efficient and effective manner, anticipate purchases, and meet anticipated demand. The Examiner views the invention as informing the merchant while relying on the merchant to take the appropriate actions; the invention itself plays a passive role in increasing, decreasing, or modifying stock levels. This does not support an argument that the invention demonstrates a practical application.
Arguments citing computer-specific elements that automatically generate recommendations and use those recommendations to modify a stock of products, and that further assert performance in a consistent, efficient, and objective manner without direct human input and without relying on subjective human factors, appear to overlook the description of a merchant in [0016]. That paragraph includes the role of the merchant, who can purchase and maintain a stock of products or more effectively anticipate future purchases, and who in doing so is likely employing his or her subjective human factors and human inputs to achieve the desired results.
In view of the above discussion, the Examiner finds the arguments for reconsideration not compelling, and the request to withdraw the rejections under 35 U.S.C. § 101 is denied.
Rejections under 35 U.S.C. § 103: Applicant argues with respect to claim 1 that the prior art does not teach that the first tokens are arranged sequentially according to at least one of the ‘price, purchase frequency of a user, or purchase frequency of a number of users.’ The Examiner notes that the prior art recites “arrange the tokens in a sequence to represent object behaviors,” and that the claimed ‘price, purchase frequency, and purchase frequency by a respective one of users’ is readily interpreted as object behavior, despite application in a different context such as the analysis of purchase data. The argument that the subject matter is dissimilar overlooks that both employ the foundational concept of sequentially ordered tokens; the application of this fundamental unit of data is not distinguished simply because the example is drawn from another application area.
This argument is not compelling and the rejections based on 35 U.S.C. § 103 will not be withdrawn.
Claim Rejections - 35 USC § 112
3. The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
4. Claims 1-6, 10-18, and 21-25 are rejected under 35 U.S.C. 112(a) as failing to comply with the written description requirement, since the amendment is not supported by the original disclosure. The original disclosure does not reasonably convey to one of ordinary skill in the art that the inventor was in possession of the invention now claimed at the time the application was filed. See In re Daniels, 144 F.3d 1452, 46 USPQ2d 1788 (Fed. Cir. 1998); In re Rasmussen, 650 F.2d 1212, 211 USPQ 323 (CCPA 1981).
Specifically, there is no support in the original disclosure concerning how a modification of a stock of products, an increase in a stock of a first subset of the products, or a decrease in a stock of a second subset of the products occurs. There is no apparent physical mechanism or process in the specification that would enable one of ordinary skill in the art to replicate how or why the modifications in stock levels occur.
To overcome this rejection, applicant may attempt to demonstrate (by argument or evidence) that the original disclosure establishes that the inventor had possession of the amended claim, or may identify the elements of the invention that cause the changes in the stock levels recited in amended claim 1.
Claim Interpretation
The examiner will interpret “causing a modification of a stock of products at a physical premises based on the third data set, including
causing an increase in a stock of a first subset of the products at the physical premises based on the third data set, and
causing a decrease in a stock of a second subset of the products at the physical premises based on the third data structure,” in claim 1 as any structure, material, or acts that cause an increase, decrease, or modification of a stock or supply level for any type of product or inventory item, and will examine accordingly. See [0016].
Claim Rejections – 35 U.S.C. § 101
35 U.S.C. § 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-6, 10-18, 21-25 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to non-statutory subject matter. Claims 1-6, 10-18, 21-25 are directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without providing significantly more.
Step 1
Step 1 of the subject matter eligibility analysis per MPEP § 2106.03 requires the claims to be directed to a process, machine, manufacture, or composition of matter. Claims 1-6, 10-18, 21-25 are directed to a process (method), a machine (system), and a product/article of manufacture, which are statutory categories of invention.
Step 2A
Claims 1-6, 10-18, 21-25 are directed to abstract ideas, as explained below.
Prong one of the Step 2A analysis requires identifying the specific limitation(s) in the claim under examination that the examiner believes recites an abstract idea, and determining whether the identified limitation(s) falls within at least one of the groupings of abstract ideas of mathematical concepts, mental processes, and certain methods of organizing human activity.
Step 2A-Prong 1
The claims recite the following limitations that are directed to abstract ideas, which can be summarized as a method (the abstract idea) of analyzing a shopper’s purchasing actions, identifying trends, and correlating products and purchases both to suggest additional products and to plan inventory based on predictions.
Claim 1 recites a method comprising:
receiving, a first data set representing a plurality of first purchases of a plurality of first products by a plurality of first users, (following rules or instructions, observation, evaluation, judgement, opinion, advertising, marketing, sales activities),
wherein the first data set comprises a plurality of first data strings, each
having a respective sequence of first tokens, (following rules or instructions, observation, evaluation, judgement, opinion),
wherein for each of the first data strings:
the first data string represents a respective one of the first purchases, (following rules or instructions, observation, evaluation,
judgement, opinion, advertising, marketing, sales activities), and
each of the first tokens of the first data string represents a respective one of the first products purchased in the respective one of the first purchases in a single transaction,
wherein for each of the first data strings, the first tokens of that first data string are arranged sequentially according to at least one of:
a price of a respective one of the first products,
a purchase frequency of a respective one of the first products by the first users, or
a purchase frequency of a respective one of the first products by a respective one of the first users; (following rules or
instructions, observation, evaluation, judgement, opinion, advertising, marketing, sales activities),
training, a model using the first data set as an input, (following rules or instructions, observation, evaluation, judgement, opinion, advertising, marketing, sales activities),
to process the input iteratively for each of the first tokens, the encoding position information indicating that a respective one of the first products was purchased in a respective one of the first purchases, (following rules or instructions, observation, evaluation, judgement, opinion, sales activities),
and
receiving, a second data set comprising a second data string representing one or more second products selected by a second user for purchase, (following rules or instructions, observation, evaluation, judgement, opinion, advertising, marketing, sales activities),
wherein the second data string comprises one or more second tokens, and wherein each of the one or more second tokens represents a respective
one of the one or more second products; (following rules or instructions, observation, evaluation, judgement, opinion, advertising, marketing, sales activities),
providing, the second data set to the model; (following rules or instructions, observation, evaluation, judgement, opinion),
outputting, a third data set generated by the model based on the second data set, wherein the third data set represents a prediction of one or more third products for purchase by the second user; (following rules or instructions, observation, evaluation, judgement, opinion, advertising, marketing, sales activities),
causing, a message or graphical user interface to be presented to a user, the message or the graphical user interface representing at least some of the one or more third products; (following rules or instructions, observation, evaluation, judgement, opinion, advertising, marketing, sales activities),
storing, the third data set using one or more computer storage devices; and
causing a modification of a stock of products at a physical premises based on the third data set, (following rules or instructions, observation, evaluation, judgement, opinion, advertising, marketing, sales activities),
including causing an increase in a stock of a first subset of the products at the physical premises based on the third data set, (following rules or instructions, observation, evaluation, judgement, opinion, advertising, marketing, sales activities),
and
causing a decrease in a stock of a second subset of the products at the physical
premises based on the third data structure, (following rules or instructions, observation, evaluation, judgement, opinion, advertising, marketing, sales activities).
Additional limitations employ the method wherein:
the plurality of first users comprises the second user, (following rules or instructions, observation, evaluation, judgement, opinion, advertising, marketing, sales activities - claim 2);
the third data set comprises a third data string comprised of third tokens, each representing a third product, (following rules or instructions, observation, evaluation, judgement, opinion, advertising, marketing, sales activities - claim 3);
at least one of the first, second, or third tokens is a SKU associated with one of the first, second, or third products, (following rules or instructions, observation, evaluation, judgement, opinion, advertising, marketing, sales activities - claim 4);
at least one of the first, second, or third tokens indicates the beginning of at least one of the first, second, or third data strings, (following rules or instructions, observation, evaluation, judgement, opinion, advertising, marketing, sales activities - claim 5);
one of the first, second, or third tokens is a respective token indicating the end of at least one of the first, second, or third data strings, (following rules or instructions, observation, evaluation, judgement, opinion - claim 6);
the first data set is first embedded data of the time when each of the first products was purchased by the first users, (following rules or instructions, observation, evaluation, judgement, opinion - claim 10);
the first tokens are the embedded data, (following rules or instructions, observation, evaluation, judgement, opinion - claim 11);
the embedded data and the first tokens have different respective data structures, (following rules or instructions, observation, evaluation, judgement, opinion - claim 12);
the second data set represents one or more second products chosen by the second user for purchase on-line, (following rules or instructions, observation, evaluation, judgement, opinion, advertising, marketing, sales activities - claim 13);
the second data set represents one or more second products chosen by the second user for purchase at a physical location, (following rules or instructions, observation, evaluation, judgement, opinion, advertising, marketing, sales activities - claim 14);
causing a message to go to the second user with an indication of a third product, (following rules or instructions, observation, evaluation, judgement, opinion, advertising, marketing, sales activities - claim 15);
estimating, based on the third data set, a future stock level for third products, (following rules or instructions, observation, evaluation, judgement, opinion, advertising, marketing, sales activities - claim 16);
receiving a fourth data set representing fourth products, providing the fourth data set to the model, obtaining a fifth data set based on the fourth data set, wherein the fifth data set is a prediction of one or more fifth products that are related to the fourth products, and storing the fifth data set, (following rules or instructions, observation, evaluation, judgement, opinion, advertising, marketing, sales activities - claim 17);
one of the fourth products is a subset of the second products, (following rules or instructions, observation, evaluation, judgement, opinion, advertising, marketing, sales activities - claim 18);
each of the first tokens of the first data strings is arranged sequentially according to the purchase frequency of the respective one of the first products by the first users, (following rules or instructions, observation, evaluation, judgement, opinion, advertising, marketing, sales activities - claim 23);
for each of the first data strings, the first tokens of that first data string are arranged sequentially according to the purchase frequency of the respective one of the first products by the first users, (following rules or instructions, observation, evaluation, judgement, opinion, advertising, marketing, sales activities - claim 24); and
for each of the first data strings, the first tokens of that first data string are arranged sequentially according to the purchase frequency of a respective one of the first products by a respective one of the first users, (following rules or instructions, observation, evaluation, judgement, opinion, advertising, marketing, sales activities - claim 25).
Each of these claimed limitations employs: organizing human activity in the form of following rules or instructions and sales activities or behaviors; performing mental processes including observation, evaluation, judgement, and opinion; and organizing human activity in the form of commercial or legal interactions, including advertising, marketing, or sales activities.
Claims 21-22 recite similar abstract ideas as those identified with respect to claims 1-6, and 10-18.
Thus, the concepts set forth in claims 1-6, 10-18, 21-25 recite abstract ideas.
Step 2A-Prong 2
As per MPEP § 2106.04, while claims 1-6, 10-18, 21-25 recite additional limitations that are hardware or software elements, such as one or more processors, a generative transformer model, one or more computerized attention mechanisms, a plurality of encoder layers, a plurality of decoder layers wherein at least one of the encoder layers or decoder layers comprises a feed-forward neural network, a graphical user interface, one or more computer storage devices, a stock keeping unit (SKU), an encoder and a decoder, a memory communicatively coupled to the at least one processor, and a non-transitory computer-readable media, these limitations are not sufficient to integrate the abstract ideas into a practical application, since these elements are invoked merely as tools to apply the instructions of the abstract ideas in a specific technological environment. The mere application of an abstract idea in a particular technological environment, and merely limiting the use of an abstract idea to a particular technological field, do not integrate an abstract idea into a practical application (MPEP § 2106.05(f) and (h)).
Evaluated individually, the additional elements do not integrate the identified abstract ideas into a practical application. Evaluating the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually.
The claims do not amount to a “practical application” of the abstract idea because they do not: (1) recite any improvement to another technology or technical field; (2) recite any improvement to the functioning of the computer itself; (3) apply the judicial exception with, or by use of, a particular machine; (4) effect a transformation or reduction of a particular article to a different state or thing; or (5) provide other meaningful limitations beyond generally linking the use of the judicial exception to a particular technological environment.
Accordingly, claims 1-6, 10-18, 21-25 are directed to abstract ideas.
Step 2B
Claims 1-6, 10-18, 21-25 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea.
The analysis above describes the additional elements recited in the claims beyond those identified as being directed to an abstract idea, as well as why the identified judicial exception(s) are not integrated into a practical application. These findings are hereby incorporated into the analysis of the additional elements considered both individually and in combination.
For the reasons provided in the analysis in Step 2A, Prong 2, the additional elements, evaluated individually and taken alone, do not amount to significantly more than a judicial exception.
Evaluating the claim limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. In addition to the factors discussed regarding Step 2A, Prong 2, there is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely amount to instructions to implement the identified abstract ideas on a computer.
Therefore, since there are no limitations in the claims 1-6, 10-18, 21-25 that transform the exception into a patent eligible application such that the claims amount to significantly more than the exception itself, the claims are directed to non-statutory subject matter and are rejected under 35 U.S.C. § 101.
Claim Rejections - 35 U.S.C. § 103
The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 16, 17, 18, 21, and 22 are rejected under 35 U.S.C. § 103 as being unpatentable over Hall (US 20230376981 A1), hereafter Hall, "Predictive Systems and Processes for Product Attribute Research and Development," in view of Sorensen (US 20230056742 A1), hereafter Sorensen, "In Store Computerized Product Promotion System with Product Prediction Model that Outputs a Target Product Message Based on Products Selected in a Current Shopping Session," in further view of Blohm (US 8630989 B2), hereafter Blohm, "Systems and Methods for Information Extraction Using Contextual Pattern Discovery," and in further view of Nazarian (US 20180285902 A1), hereafter Nazarian, "System and Method for Data-Driven Insight into Stocking Out-of-Stock Shelves."
Regarding Claim 1, (Currently Amended) A method comprising:
receiving, by one or more processors, a first data set representing a plurality of first purchases of a plurality of first products by a plurality of first users, wherein the first data set comprises a plurality of first data strings, Hall teaches, (the model service generates a model 119 for predicting the sales volume of a particular product. The model service 107 can train the model 119 using a first training dataset, [0052], and the intake service 103 can analyze PoS data and identify product descriptions including product attributes 114, such as, for example, flavor, ingredients, or product form. The intake service 103 can extrapolate key features from syndicated sales data, such as, for example, shelf price, % All Commodity Volume (“ACV”) Distribution, brand name, packaging type, mass, and volume, [0046]),
each having a respective sequence of first tokens, wherein for each of the first data strings: the first data string represents a respective one of the first purchases, Hall teaches, (examples of product-related sales data include sale volumes, sale revenue, cost of sale, product profitability, product purchase transactions, product refunds, product exchanges, and financial data related to providing particular product attributes or engaging particular econometric or psychographic indicators, [0028]) and each of the first tokens of the first data string represents a respective one of the first products purchased in the respective one of the first purchases in a single transaction, Hall does not teach, Sorensen teaches, (the product prediction model 38 receives as input the one or more products selected by the shopper 12 during the current shopping session and outputs the identity 40 of the at least one target product 36, [0041], and the token embedding indicates the identity 40 of the target product 36, [0042]),
Hall and Sorensen are both considered to be analogous to the claimed invention because they are both in the field of product performance predictions. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the machine learning methods of Hall with the token sequences and attention mechanisms of Sorensen so that the transformer can use the attention mechanism to learn the contextual relationship between products selected by shoppers, Sorensen [0042], and to direct individual consumers toward specific products that they may desire, [0001],
wherein for each of the first data strings, the first tokens of that first data string are arranged sequentially according to one or more first characteristics, Hall does not teach, Blohm teaches, (the sequence analysis component operates to prepare the information necessary to generate semantic signatures for the context associated with each relation candidate. A context string may be comprised of a sequence of words or tokens. Embodiments provide that the semantic signature of a context string may satisfy the following two conditions: (1) it should be the most informative subsequence(s) of tokens that identifies the relation candidate; and at the same time (2) it is shared by the most number of context strings of other relation candidates. The second condition implies that S2C may benefit from subsequence(s) of tokens in a semantic signature being a frequent sequence. As such, identifying frequent sequences of tokens may be necessary, Blohm, [8:62-9:08]).
Hall and Blohm are both considered to be analogous to the claimed invention because they are both in the field of applied machine learning or knowledge engineering. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the machine learning methods of Hall with the flexible approach to token sequencing of Blohm to discover textual patterns indicating relations and then write rules to incorporate the patterns observed, [4:20-22].
where the one or more first characteristics comprises at least one of
a price of a respective one of the first products, Hall teaches, (In one or more embodiments, the systems and processes leverage the analyses to generate predictions for a)product sales volume (e.g., at a particular price level, via a particular channel, at a particular location, over a particular time interval, with particular product attributes, or combinations thereof), b) the impact of price and other attribute changes on sales volume, and c) generating recommendations for product development strategy to propel product performance, Hall [0005]),
a purchase frequency of a respective one of the first products by the first users, or
a purchase frequency of a respective one of the first products by a respective one of the first users,
training, by the one or more processors, a generative transformer model including one or more computerized attention mechanisms using the first data set as an input; Hall does not teach, Sorensen teaches, (The embeddings are used as input into a transformer 404 that uses an attention mechanism to learn a contextual relationship between products selected by shoppers, [0042]), receiving, by one or more processors, a second data set comprising a second data string representing one or more second products selected by a second user for purchase, wherein the second data string comprises one or more second tokens, and wherein each of the one or more second tokens represents a respective one of the one or more second products; Sorensen teaches, ( the product prediction model 38 is given the task of predicting two products, Products D and J. Each of the products is assigned a token embedding, a positional embedding, and a trip embedding. The token embedding indicates the identity 40 of the target product 36. Because the product prediction model 38 is tasked with predicting the two products, a mask token is assigned to those products whose identities are not inputted. 
In this example, the training data set is a time-ordered list of products purchased by a shopper 12, and positional embeddings are assigned that indicate the order in which the products were chosen by the shopper 12, with E.sub.1 assigned to the first product, E.sub.2 assigned to the second product, etc., [0042]), providing, by the one or more processors, the second data set to the generative transformer model; outputting, by the one or more processors, a third data set generated by the generative transformer model based on the second data set, wherein the third data set represents a prediction of one or more third products for purchase by the second user, Hall teaches, (The model service 107 can train the model 119 using a first training dataset, a second training dataset, and a validating training dataset derived from a set of historical data 115 that is associated with products having similar attributes (e.g., the historical data including sales data, relevant econometric and/or psychographic indicators, and unstructured data, such as social media postings, customer reviews, and search data). The first training dataset can correspond to a first percentage of the set of historical data (e.g., 50%, 60%, 70%, or any suitable value), the second training dataset can correspond to a second percentage of the set of historical data (e.g., 5%, 10%, 15%, or any suitable value), and the validation dataset can correspond to a third percentage of the set of historical data (e.g., 5%, 10%, 15%, or any suitable value) [0052]); and
storing, by the one or more processors, the third data set using one or more computer storage devices, Hall teaches, (The process 200 can include performing one or more data preparation processes 300 (FIG. 3) to process the product data 113 and historical data 115 (e.g., or other data elements from which product data 113 or historical data 115 may be extracted or derived, such as electronic records). In various embodiments, the intake service 103 and/or NLP service 105 store the processed product data 113 and historical data 115 at the data store 111, [0062]).
Hall and Sorensen are both considered to be analogous to the claimed invention because they are both in the field of product predictions. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the machine learning methods of Hall with the token sequences and attention mechanisms of Sorensen so that the transformer can use the attention mechanism to learn the contextual relationship between products selected by shoppers, Sorensen [0042].
and causing a modification of a stock of products at a physical premises based on the third data set, including
causing an increase in a stock of a first subset of the products at the physical
premises based on the third data set, and
causing a decrease in a stock of a second subset of the products at the physical
premises based on the third data structure. Hall does not teach, Nazarian teaches, (modifying the current shelf inventory based on the notification, to yield an updated shelf inventory; [0003], generating, in real-time, schedules for restocking items on store shelfs such that the amount of items on the shelf maintain desired rates of sales per the prediction models, [Abstract], and notification that at least one unit of the product has been removed from the shelf (406) and modifies the current shelf inventory based on the notification, to yield an updated shelf inventory, [0035]).
Hall and Nazarian are both considered to be analogous to the claimed invention because they are both in the field of product predictions. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the machine learning methods of Hall with the inventory tracking of Nazarian to generate, in real-time, schedules for restocking items on store shelfs such that the amount of items on the shelf maintain desired rates of sales per the prediction models, Nazarian, [Abstract].
Claims 21 and 22 are rejected for reasons similar to those provided for Claim 1. In these claims, the addition of the system comprising at least one processor and a memory communicatively coupled to the at least one processor, Claim 21, and the addition of a non-transitory computer-readable media, claim 22, do not change the rationale for the rejections under 35 U.S.C. § 103 based on the referenced prior art.
Regarding claim 16, the method of claim 1, further comprising:
estimating, based on the third data set, a future stock level of the one or more third
products, Hall teaches, (the process 700 can include performing at least one action for the particular product, based on the prediction of the at least one product performance attribute. [ ] The prediction system 101 can generate a strategy report that outlines one or more actions based on the predictions based on the product performance attribute and the product data 113. For example, the prediction system 101 can generate the strategy report to recommend stocking amounts and stocking consistency to ensure the particular product does not sell out at the particular retailer, [0110]).
Regarding claim 17, The method of claim 1, further comprising:
receiving a fourth data set, wherein the fourth data set represents one or more fourth
products; providing the fourth data set to the generative transformer model; obtaining a fifth data set generated by the generative transformer model based on the fourth data set, wherein the fifth data set represents a prediction of one or more fifth products that are related to the one or more fourth products; and storing the fifth data set using one or more hardware storage devices.
Receiving a fourth data set and providing it to the generative transformer model to produce a fifth data set is a design choice with no benefit to the utility of the invention. Regardless, Hall teaches a first, second, and validating (third) training dataset derived from historical data associated with products having similar attributes, [0052], and transmitting the prediction 125 to one or more computing devices 102, storing the prediction 125 at the data store 111 or a remote storage environment, [0070]. It would be a design choice to continue iterations of predictions using the transformer model and it would be obvious for one of ordinary skill in the art to rearrange parts of an invention, in this case choose appropriate numbers and levels of predictions, to improve model prediction; Hall teaches, the model service 107 can modify the model towards improving model accuracy until an optimal model 119 is generated (e.g., the optimal model 119 meeting a predetermined accuracy and/or other performance threshold). The model service 107 can execute the optimal model 119 on various permutations 123 of product attributes, econometric indicators, and/or psychographic indicators to generate a plurality of product performance predictions. The model service 107 can identify an optimal permutation by determining the permutation 123 predicted to demonstrate the highest product performance (e.g., as measured in sales volume, revenue, demand, or any suitable performance metric), Hall - [0053], (MPEP 2144.04 I, 2144.04 VI (C)).
Regarding Claim 18, The method of claim 17, wherein the one or more fourth products is a subset of the one or more second products,
The decision to receive a data set representing a product and providing it to the transformer model to further refine a prediction based on product attributes or sales performance or other criteria is a design choice with no benefit to the utility of the invention. Hall teaches a first, second, and validating (third) training data set [0052] and that the intake service 103 can generate product data 113 and historical data 115. It would be a design choice to continue iterations of predictions using the transformer model and it would be obvious for one of ordinary skill in the art to rearrange parts of an invention, in this case choose appropriate numbers and levels of predictions, to improve model prediction; Hall teaches, The intake service 103 can perform data processing actions including, but not limited to, generating statistical metrics from data (e.g., standard deviation, quantiles, etc.), imputing, replacing, and removing data values (e.g., outlier values, null values, missing values, etc.), filtering data values, generating data values (e.g., based on product data 113 or historical data 115 ), encoding data from a first representation to a second representation, generating categorical features and data features, and other metadata, enriching product data (e.g., by associating product data with other product data and/or metadata, such as time, identification, and location information), generating multi-dimensional data representations, organizing data into one or more datasets (e.g., training datasets, testing datasets, validation datasets, experimental datasets, etc.), and segregating datasets into additional datasets, [0045].
Claim 2 is rejected under 35 U.S.C. § 103 as being taught by Hall, (US 20230139999 A1),
hereafter Hall, “Predictive Systems and Processes for Product Attribute Research and Development,” in view of Sorensen (US 20230056742 A1), hereafter Sorensen, “In Store Computerized Product Promotion system with Product Prediction Model that Outputs a Target Product Message Based on Products Selected in A Current Shopping Session,” in further view of Blohm, (US 8630989 B2), hereafter Blohm, "Systems and Methods for Information Extraction Using Contextual Pattern Discovery," in further view of Bhasin, (US 20200104865 A1), hereafter Bhasin, “System and Method for Predicting Future Purchases Based on Payment Instruments Used.”
Regarding claim 2, The method of claim 1, wherein the plurality of first users comprises the
second user, Hall does not teach, Bhasin teaches, (A purchase prediction engine 112D may include instructions to calculate probabilities of any given user's future purchase based on matching payment network account data 164A for the user and payment network system global transaction data 166A corresponding to a first user's transactions with existing purchases of second users (e.g., payment network system global transaction data 166A corresponding to other users) with similar purchase histories. In some embodiments, the purchase prediction engine 112D instructions to calculate probabilities may include instructions to implement machine learning (ML) or artificial intelligence (AI) techniques to continuously or periodically refine the probabilities of a first user's future purchases based on the first user's current purchase(s) and matching them to previous purchases (e.g., payment network system global transaction data 166A corresponding to the first user), [0031]).
Hall and Bhasin are both considered to be analogous to the claimed invention because they are both in the field of product predictions. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the machine learning methods of Hall with the purchase history techniques of Bhasin so that the outcome prediction is constantly refined to a more accurate result, [0034].
Claim 3 is rejected under 35 U.S.C. § 103 as being taught by Hall, (US 20230376981 A1),
hereafter Hall, “Predictive Systems and Processes for Product Attribute Research and Development,” in view of Sorensen (US 20230056742 A1), hereafter Sorensen, “In Store Computerized Product Promotion system with Product Prediction Model that Outputs a Target Product Message Based on Products Selected in A Current Shopping Session,” in further view of Blohm, (US 8630989 B2), hereafter Blohm, "Systems and Methods for Information Extraction Using Contextual Pattern Discovery," in further view of Lee (US 20210304121 A1), hereafter Lee, “Computerized Systems and Methods for Product Integration and Deduplication using Artificial Intelligence.”
Regarding claim 3, The method of claim 1, wherein the third data set comprises a third data string,
wherein the third data string comprises one or more third tokens, and wherein each of the one or more third tokens represents a respective one of the one or more third products, Hall does not teach, Lee teaches, (product information associated with a plurality of third products; tagging, using the second machine learning model, a plurality of keywords from product information associated with the plurality of third products; determining, using the second machine learning model, a plurality of second match scores between the plurality of third products, by using the tagged keywords associated with the plurality of third products; when any one of the plurality of second match scores is above the first predetermined threshold, determining, using the second machine learning model, that the third products associated with the second match score are identical and deduplicating the identical third products; and modifying the webpage to include deduplication of the identical third products, [0007] and The machine learning model may tokenize keywords by referencing a token dictionary stored in database 446 and implementing an Aho-Corasick algorithm to determine whether or not to split a keyword into multiple keywords, [0088]).
Hall and Lee are both considered to be analogous to the claimed invention because they are both in the field of product predictions. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the machine learning methods of Hall with the product deduplication methods of Lee to improve a consumer’s user experience, Lee [0003].
Claim 4 is rejected under 35 U.S.C. § 103 as being taught by Hall, (US 20230376981 A1),
hereafter Hall, “Predictive Systems and Processes for Product Attribute Research and Development,” in view of Sorensen (US 20230056742 A1), hereafter Sorensen, “In Store Computerized Product Promotion system with Product Prediction Model that Outputs a Target Product Message Based on Products Selected in A Current Shopping Session,” in further view of Blohm, (US 8630989 B2), hereafter Blohm, "Systems and Methods for Information Extraction Using Contextual Pattern Discovery," in further view of Lee (US 20210304121 A1), hereafter Lee, “Computerized Systems and Methods for Product Integration and Deduplication using Artificial Intelligence,” in further view of Yee, (US 20230042210 A1), hereafter Yee, “Seasonal Clothing Short-term Sales Prediction Method Based on Weather Forecast Information.”
Regarding Claim 4, The method of claim 3, wherein at least one of the first tokens, the one or more second tokens, or the one or more third tokens comprises:
an identifier representing a stock keeping unit (SKU) associated with a respective one of the first products, the one or more second products, or the one or more third products, Hall does not teach, Yee teaches, (Tokenization and keyword matching may also be used to identify the brand associated with a particular sale, using word-stemming and lemmatization for better hit rates, [0100], and the scraping module can advantageously capture granular details of a particular transaction contained in an electronic receipt in the user's email, including the date of the transaction, the price of product or products included in the transaction, a unique identifier of the product(s) (such as stock keeping-unit (SKU) number), and the brand of the product(s), [0074]).
Hall and Yee are both considered to be analogous to the claimed invention because they are both in the field of machine learning data processing. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the machine learning methods of Hall with the SKU data of Yee to advantageously capture granular details of a particular transaction contained in an electronic receipt, Yee [0074].
Claim 5 is rejected under 35 U.S.C. § 103 as being taught by Hall, (US 20230376981 A1),
hereafter Hall, “Predictive Systems and Processes for Product Attribute Research and Development,” in view of Sorensen (US 20230056742 A1), hereafter Sorensen, “In Store Computerized Product Promotion system with Product Prediction Model that Outputs a Target Product Message Based on Products Selected in A Current Shopping Session,” in further view of Blohm, (US 8630989 B2), hereafter Blohm, "Systems and Methods for Information Extraction Using Contextual Pattern Discovery," in further view of Lee (US 20210304121 A1), hereafter Lee, “Computerized Systems and Methods for Product Integration and Deduplication using Artificial Intelligence,” in further view of Robinson (US 4747072 A), hereafter Robinson, “Pattern Addressable Memory.”
Regarding claim 5, the method of claim 3, wherein at least one of the first tokens, the one or more second tokens, or the one or more third tokens comprises:
a respective token indicating a beginning of at least one of the first data strings, the one or
more second data strings, or the one or more third data strings, Hall does not teach, Robinson teaches, (instruction is provided which "jumps" these set tokens back to the beginning of the data sequences in which they appear, i.e., the token is moved back to the first (.sub.0 which appears above each set token. This marks the beginning of each matched data sequence, which facilitates the read out of the sequence, [11:47-53]).
Hall and Robinson are both considered to be analogous to the claimed invention because they are both in the field of data organization and processing. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the machine learning methods of Hall with the token data sequences of Robinson to facilitate the read out of the sequence, [11:52-53].
Claim 6 is rejected under 35 U.S.C. § 103 as being taught by Hall, (US 20230139999 A1),
hereafter Hall, “Predictive Systems and Processes for Product Attribute Research and Development,” in view of Sorensen (US 20230056742 A1), hereafter Sorensen, “In Store Computerized Product Promotion system with Product Prediction Model that Outputs a Target Product Message Based on Products Selected in A Current Shopping Session,” in further view of Blohm, (US 8630989 B2), hereafter Blohm, "Systems and Methods for Information Extraction Using Contextual Pattern Discovery," in further view of Lee (US 20210304121 A1), hereafter Lee, “Computerized Systems and Methods for Product Integration and Deduplication using Artificial Intelligence,” in further view of Das Sharma, (US 10915468 B2), hereafter Das Sharma, “Sharing Memory and I/O Services Between Nodes.”
Regarding claim 6, The method of claim 3, wherein at least one of the first tokens, the one or more second tokens, or the one or more third tokens comprises:
a respective token indicating an end of at least one of the first data strings, the one or more second data strings, or the one or more third data strings, Hall does not teach, Das Sharma teaches, (an EDS token is sent following the end of the SMI3 data window defined by SMI3 STP token 725. The EDS token can signal the end of the data stream and imply that an ordered set block is to follow, [14:12-15]).
Hall and Das Sharma are both considered to be analogous to the claimed invention because they are both in the field of product predictions. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the machine learning methods of Hall with the token organization of Das Sharma to benefit from better energy efficiency and energy conservation, [2:62-63].
Claim 10 is rejected under 35 U.S.C. § 103 as being taught by Hall, (US 20230139999 A1),
hereafter Hall, “Predictive Systems and Processes for Product Attribute Research and Development,” in view of Sorensen (US 20230056742 A1), hereafter Sorensen, “In Store Computerized Product Promotion system with Product Prediction Model that Outputs a Target Product Message Based on Products Selected in A Current Shopping Session,” in further view of Blohm, (US 8630989 B2), hereafter Blohm, "Systems and Methods for Information Extraction Using Contextual Pattern Discovery," in further view of Lee (US 20210304121 A1), hereafter Lee, “Computerized Systems and Methods for Product Integration and Deduplication using Artificial Intelligence,” in further view of Ding, (CN 114169914 A), hereafter Ding, “Features Construction Method For Article Sales Prediction, Device And Storage Medium.”
Regarding claim 10, The method of claim 3, wherein the first data set comprises first embedded data representing a respective time at which each of the first products was purchased by the first users, Hall does not teach, Ding teaches, (In one embodiment, the fifth characteristic data is a plurality of, the one for article sales prediction of characteristic constructing device further comprises filter module, for obtaining the importance of each fifth characteristic data; according to the importance degree of each fifth characteristic data from multiple fifth characteristic data filter the target fifth characteristic data; constructing module 406 is specifically used for according to the first characteristic data, the first differential characteristic data, the second characteristic data, the second differential characteristic data, the fourth characteristic data and the target fifth characteristic data to construct a time sequence characteristic set of article sales prediction.
For specific definition about characteristic constructing device for article sales prediction can be referred to in the above for article sales prediction of the characteristic constructing method, here will not be repeated here. Each module in the feature building device for article sales prediction can be all or partially implemented by software, hardware and combinations thereof. The modules can be embedded in the hardware form or independent from the processor in the computer device, also can be stored in the memory of the computer device in software form, so as to facilitate the processor to perform the operation corresponding to each module, [Ding, p. 14]).
Hall and Ding are both considered to be analogous to the claimed invention because they are both in the field of applied machine learning for sales prediction. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the machine learning methods of Hall with the embedded data approach of Ding to enable construction of a time sequence characteristic set for article sales prediction, [Ding, p.15].
Claim 11 is rejected under 35 U.S.C. § 103 as being taught by Hall, (US 20230139999 A1),
hereafter Hall, “Predictive Systems and Processes for Product Attribute Research and Development,” in view of Sorensen (US 20230056742 A1), hereafter Sorensen, “In Store Computerized Product Promotion system with Product Prediction Model that Outputs a Target Product Message Based on Products Selected in A Current Shopping Session,” in further view of Blohm, (US 8630989 B2), hereafter Blohm, "Systems and Methods for Information Extraction Using Contextual Pattern Discovery," in further view of Lee (US 20210304121 A1), hereafter Lee, “Computerized Systems and Methods for Product Integration and Deduplication using Artificial Intelligence,” in further view of Ding, (CN 114169914 A), hereafter Ding, “Features Construction Method For Article Sales Prediction, Device And Storage Medium,” in further view of Bordawekar, (US 11429579 B2), hereafter Bordawekar, “Building a Word Embedding Model to Capture Relational Data Semantics.”
Regarding claim 11, The method of claim 10, wherein the first tokens comprise the embedded data, Hall does not teach, Bordawekar teaches, (the machine learning environment may apply one or more embedding techniques to all of the weighted unordered group of string tokens to determine meaning vectors for identifiers of each row of the relational database. For example, a first-string token having a higher weight than a second-string token may be assigned a greater contribution than the second-string token within an associated unordered group of string tokens during the one or more embedding techniques, [10:51-59]).
Hall and Bordawekar are both considered to be analogous to the claimed invention because they are both in the field of applied machine learning. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the machine learning methods of Hall with the data embedding techniques of Bordawekar to improve an understanding of existing relationships between data within a relational database, [11:29-31].
Claim 12 is rejected under 35 U.S.C. § 103 as being taught by Hall, (US 20230139999 A1),
hereafter Hall, “Predictive Systems and Processes for Product Attribute Research and Development,” in view of Sorensen (US 20230056742 A1), hereafter Sorensen, “In Store Computerized Product Promotion system with Product Prediction Model that Outputs a Target Product Message Based on Products Selected in A Current Shopping Session,” in further view of Blohm, (US 8630989 B2), hereafter Blohm, "Systems and Methods for Information Extraction Using Contextual Pattern Discovery," in further view of Lee (US 20210304121 A1), hereafter Lee, “Computerized Systems and Methods for Product Integration and Deduplication using Artificial Intelligence,” in further view of Ding, (CN 114169914 A), hereafter Ding, “Features Construction Method For Article Sales Prediction, Device And Storage Medium,” in further view of Marchetti, (US 12158884 B1), hereafter Marchetti, “Embedded Tokens For Searches in Rendering Dashboards.”
Regarding claim 12, The method of claim 10, wherein the embedded data and the first tokens are represented by different respective data structures, Hall does not teach, Marchetti teaches, (a first request to display a dashboard associated with a dashboard definition including an embedded search query, wherein a first token is associated with the embedded search query, transmitting a second request to execute the embedded search query on a dataset generated by a data source, [claim 1]; the data source 1002 of the computing environment 1000 is a component of a computing device that produces machine data. [ ] The component can produce machine data in an automated fashion (e.g., through the ordinary course of being powered on and/or executing) and/or as a result of user interaction with the computing device (e.g., through the user's use of input/output devices or applications). The machine data can be structured, semi-structured, and/or unstructured, (120); and to achieve greater efficiency, the search system 1060 can apply map-reduce methods to parallelize searching of large volumes of data. Additionally, because the original data is available, the search system 1060 can apply a schema to the data at search time. This allows different structures to be applied to the same data, or for the structure to be modified if or when the content of the data changes, [25:63-26:25]).
Hall and Marchetti are both considered to be analogous to the claimed invention because they are both in the field of applied machine learning. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the machine learning methods of Hall with the data embedding techniques of Marchetti so that the indexing system does not need to be provided with a schema describing the data, [26:44-45].
Claims 13 and 14 are rejected under 35 U.S.C. § 103 as being taught by Hall, (US
20230139999 A1), hereafter Hall, “Predictive Systems and Processes for Product Attribute Research and Development,” in view of Sorensen (US20230056742A1), hereafter Sorensen, “In Store Computerized Product Promotion system with Product Prediction Model that Outputs a Target Product Message Based on Products Selected in A Current Shopping Session,” in further view of Bhasin (CN 110969477 A), hereafter Bassin, “Based On A Payment Instrument System And Method Used To Predict Future Purchase.”
Regarding claim 13, The method of claim 1, wherein the second data set represents one or more second products selected by the second user for purchase at an on-line merchant, Hall does not teach, Bassin teaches, (smaller digital commerce provider of online retailers lack a capability, i.e., know the customer may purchase in the future based on past behavior, [p.2], the method first payment device may be used at the merchant computer system receives the first purchase transaction data of the first purchase transaction, the first purchase transaction data comprising a first product in the first payment instrument fingerprint and the plurality of products of the first product identifier, and determining the first payment instrument refers to the prediction data of the fingerprint. Prediction data may include a second product identifier of the second product in the plurality of products corresponding to the first payment instrument in the second purchase transaction after the first purchase transaction refers to the probability of the fingerprint. [Summary, p.2]).
Hall and Bassin are both considered to be analogous to the claimed invention because they are both in the field of applied machine learning. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the machine learning methods of Hall with the second product on-line tracking techniques of Bassin to determine the probability of a particular future purchase after the current purchase, [Abstract].
Regarding claim 14, The method of claim 1, wherein the second data set represents one or more second products selected by the second user for purchase at a physical merchant, Hall does not teach, Bassin teaches, (the system can be based on global transaction data of the payment device associated with a payment network system to predict future transactions of the first payment device, transaction data may correspond to a purchase transaction between the multiple client computer systems with a plurality of merchant computer system, [ ] the first purchase transaction data can include a first payment instrument fingerprint and identifying a first product for a first product in the plurality of products. These instructions also may include determining the first payment instrument refers to the prediction data of the fingerprint. prediction data may include a second product identifier of the second product in the plurality of products corresponding to the first payment instrument in the second purchase transaction after the first purchase transaction refers to the probability of the fingerprint, [Bassin, p.2]).
Hall and Bassin are both considered to be analogous to the claimed invention because they are both in the field of applied machine learning. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the machine learning methods of Hall with the second product on-line tracking techniques of Bassin to determine the probability of a particular future purchase after the current purchase, [Abstract].
Claim 15 is rejected under 35 U.S.C. § 103 as being taught by Hall, (US 20230139999
A1), hereafter Hall, “Predictive Systems and Processes for Product Attribute Research and Development,” in view of Sorensen (US 20230056742 A1), hereafter Sorensen, “In Store Computerized Product Promotion system with Product Prediction Model that Outputs a Target Product Message Based on Products Selected in A Current Shopping Session,” in further view of Blohm, (US 8630989 B2), hereafter Blohm, "Systems and Methods for Information Extraction Using Contextual Pattern Discovery," in further view of Huang (US 11861674 B1), hereafter Huang, “Method, One Or More Computer-readable Non-transitory Storage Media, And A System For Generating Comprehensive Information For Products Of Interest By Assistant Systems.”
Regarding Claim 15, The method of claim 1, further comprising:
causing a message to be presented to the second user, wherein the message comprises an
indication of at least some of the one or more third products, Hall does not teach, Huang teaches, (The assistant system 140 may additionally provide more information regarding shopping a product of interest and a virtual try-on function of the product so that the user can make a better decision. Additionally, the assistant system 140 may find related products that the user may be interested in. The assistant system 140 may assist a user both reactively and proactively, [32:16-23], in particular embodiments, the assistant system may further send, to the client system, instructions for presenting the one or more social summaries of the one or more products of interest, [3:51-54]).
Hall and Huang are both considered to be analogous to the claimed invention because they are both in the field of applied machine learning for product predictions. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the machine learning methods of Hall with the second product notification techniques of Huang to improve users' shopping experience via assistant-powered client systems (e.g., smart phone, AR/VR glasses, smart display), Huang, [3:1-4].
Conclusion
Claims 23, 24, and 25 are not rejected by prior art under 35 U.S.C. § 103.
The closest prior art to the invention includes Blohm, (US 8630989 B2), hereafter Blohm, “Systems and Methods for Information Extraction Using Contextual Pattern Discovery,” and Trinh, (US 20210390609 A1), “System and Method for E-Commerce Recommendations.” None of the prior art, alone or in combination, teaches the claimed invention as recited in these claims, wherein the novelty is in the combination of all the limitations and not in a single limitation.
Regarding claim 23, The method of claim 1, wherein for each of the first data strings, the first tokens of that first data string are arranged sequentially according to the price of a respective one of the first products.
Regarding claim 24, The method of claim 1, wherein for each of the first data strings, the first tokens of that first data string are arranged sequentially according to the purchase frequency of the respective one of the first products by the first users.
Regarding claim 25, The method of claim 1, wherein for each of the first data strings, the first tokens of that first data string are arranged sequentially according to the purchase frequency of a respective one of the first products by a respective one of the first users.
Blohm teaches, (A context string may be comprised of a sequence of words or tokens. Embodiments provide that the semantic signature of a context string may satisfy the following two conditions: (1) it should be the most informative subsequence(s) of tokens that identifies the relation candidate; and at the same time (2) it is shared by the most number of context strings of other relation candidates. The second condition implies that S2C may benefit from subsequence(s) of tokens in a semantic signature being a frequent sequence. As such, identifying frequent sequences of tokens may be necessary, Blohm, [8:65-9:08].
Embodiments provide that sequence analysis may be performed to identify candidate
subsequences of tokens for set of relation candidate C through the following steps: (1) each given context string str.sub.e is first tokenized into a token sequence s; (2) all subsequences of s are obtained and counted; (3) subsequences with frequency lower than freq.sub.min are removed; and (4) subsequences not within the threshold of sequence size (size.sub.(min,max)) are also removed. These steps may define a frequent sequence set S(C, freq.sub.min,size.sub.(min,max)), [9:09-17]).
Trinh teaches, (a buyer data interface 302A and a seller data interface 302B provide text inputs that preferably are also analyzed with the tokenizers in 318A and 318B. A tokenizer is able to break down the text inputs into parts of speech. It is preferably also able to stem the words. For example, running and runs could both be stemmed to the word run. This tokenizer information is then fed into an AI engine in 306 and a match output 304 is provided by the AI engine), Trinh, [0065]; (the Convolutional Neural Network (CNN) may use entire sequences of session clickIDs, but preferably CNN 308 instead features deep 3D convolutional neural networks with 13 layers, to incorporate both ID information and textual features such as item's name, item's categories), [0067]; (the inputs are processed by the AI system 408, which creates a buyer's session sequence. The sequence may be related to a single session or alternatively may include inputs from multiple sessions with that buyer; the next recommendation for the buyer's session is then determined, and a suggested product or service is then provided), [0072].
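The tokenizer/stemmer role described in Trinh [0065] can be sketched as follows. This is a crude illustration under stated assumptions: a real system would use an actual stemming algorithm; the suffix-stripping rule and function name here are hypothetical:

```python
import re

def tokenize_and_stem(text):
    """Simplified sketch of the tokenizer described in Trinh [0065]:
    split text into word tokens, then reduce inflected forms to a common
    stem (a crude suffix-stripping rule stands in for a real stemmer)."""
    tokens = re.findall(r"[a-z]+", text.lower())
    stems = []
    for t in tokens:
        for suffix in ("ning", "ing", "s"):
            if t.endswith(suffix) and len(t) > len(suffix) + 2:
                t = t[: -len(suffix)]
                break
        stems.append(t)
    return stems

print(tokenize_and_stem("Running and runs"))  # ['run', 'and', 'run']
```

This reproduces Trinh's example: "running" and "runs" are both reduced to the stem "run" before the token information is fed onward.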
Specific token sequences arranged according to price, purchase frequency, or purchase frequency by a respective one of the first users were not taught. The references, individually or in combination, do not teach the complete scope of these claims.
The prior art made of record and not relied upon, which is considered pertinent to applicant's
disclosure or directed to the state of the art, is listed on the enclosed PTO-892.
Any inquiry concerning this communication or earlier communications from the
examiner should be directed to MICHAEL BOROWSKI whose telephone number is (703)756-1822. The examiner can normally be reached M-F 8-4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jerry O’Connor can be reached on (571) 272-6787. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at (866) 217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call (800) 786-9199 (IN USA OR CANADA) or (571) 272-1000.
/MB/
Patent Examiner, Art Unit 3624
/MEHMET YESILDAG/Primary Examiner, Art Unit 3624