Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This Office action is issued in response to the amendment filed on 12/30/2025. Claims 1 and 3-20 are pending. Applicants' arguments have been carefully and respectfully considered and addressed. Accordingly, this action is made FINAL, necessitated by amendment.
Claims 1 and 3-20 are presented for examination.
Response to Arguments
Applicants' arguments have been carefully and respectfully considered and addressed. The arguments presented are moot in view of the amendment.
With regard to the 101 rejections, the arguments have been fully considered and are persuasive; therefore, the rejection is withdrawn.
With regard to the arguments pertaining to the 103 rejection, Applicant's arguments and amendment were fully considered but are moot in view of the new ground of rejection over Glesinger et al., US Patent Application Publication US 20240104305 A1 (hereinafter Glesinger), in view of Crabtree et al., US Patent Application Publication US 20240386015 A1 (hereinafter Crabtree), further in view of Tran et al., US Patent US 12399890 B2 (hereinafter Tran), and further in view of Chen et al., Chinese Patent Publication CN 112329444 B (hereinafter Chen), which teach the amended claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 3-20 are rejected under 35 U.S.C. 103 as being unpatentable over Glesinger et al., US Patent Application Publication US 20240104305 A1 (hereinafter Glesinger), in view of Crabtree et al., US Patent Application Publication US 20240386015 A1 (hereinafter Crabtree), further in view of Tran et al., US Patent US 12399890 B2 (hereinafter Tran), and further in view of Chen et al., Chinese Patent Publication CN 112329444 B (hereinafter Chen).
Regarding claim 1, Glesinger teaches a method comprising: obtaining a model trained with…understanding of natural language in relation to neural network architectures ([0096]-[0097], [0168], [0181], [0202], wherein Glesinger describes using different models to process multi-modal data and generate a representation of generalized semantics); providing input information to the model, the input information comprising at least one of the following (Abstract, [0263], wherein Glesinger describes providing information to an inference and relevancy models function to perform inferences from the information): natural language input information; and neural network architecture input information ([0026], [0157], wherein Glesinger applies natural language processing techniques to text strings such as words, sentences, and phrases).
Glesinger teaches multi-modal processing ([0096]-[0097], [0168], [0181], [0202], wherein Glesinger describes using different models to process multi-modal data and generate a representation of generalized semantics).
Glesinger does not teach a bi-modal understanding of natural language; using the model to process the input information to generate inference information; a similarity evaluator for processing encoded representations to determine a similarity measure using a cosine similarity metric.
However, in the analogous art of bi-modal understanding of natural language and neural architectures, Crabtree teaches a bi-modal understanding of natural language (FIG. 28, [0091], [0096], [0155], [0365], [0367], [0420], wherein Crabtree incorporates a cross-modal computing system for synchronizing representations such as text, images, and audio); using the model to process the input information to generate inference information ([0037], [0091], [0107], [0112]-[0113], wherein Crabtree describes processing input data using models and generating inference information); and a similarity evaluator for processing encoded representations to determine a similarity measure using a cosine similarity metric ([0160], wherein Crabtree describes embedding/semantic representation approaches that may utilize linguistic structures such as sequential (L2R and R2L), constituents, and dependency trees, and wherein similarity metrics that can be used to assess such embedding/semantic representation approaches include, but are not limited to, cosine similarity, dot product, ICM.sub.ß, and Euclidean similarity).
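By way of illustration only, the following hypothetical sketch (not taken from any cited reference; the vectors and values are assumed for the example) shows how a similarity evaluator may compute a cosine similarity metric between two encoded representations:

```python
# Hypothetical illustration of a cosine similarity metric between two
# encoded representations; vectors and values are assumed for the example.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Return cos(theta) between vectors a and b, a value in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

text_encoding = np.array([0.2, 0.7, 0.1])  # encoded text representation (assumed)
arch_encoding = np.array([0.3, 0.6, 0.2])  # encoded architecture representation (assumed)
print(cosine_similarity(text_encoding, arch_encoding))  # ~0.97, i.e., highly similar
```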
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Glesinger with Crabtree by incorporating the method of a bi-modal understanding of natural language, using the model to process the input information to generate inference information, and a similarity evaluator for processing encoded representations to determine a similarity measure using a cosine similarity metric of Crabtree into the method of obtaining a model trained with…understanding of natural language in relation to neural network architectures of Glesinger, for the purpose of integrating a semantic search system with an AI platform to provide advanced search capabilities by leveraging automatically generated ontologies and knowledge graphs and employing natural language processing, machine learning, and large language models to create, update, and align ontologies from diverse data sources (Crabtree: Abstract).
Glesinger does not teach a text encoder to process natural language information to generate word embeddings; a neural network architecture encoder to process neural network architecture information to generate graph encodings; a cross transformer encoder to process the word embeddings and the graph encodings to generate joint embeddings.
However, in the analogous art of bi-modal understanding of natural language and neural architectures, Tran teaches a text encoder to process natural language information to generate word embeddings (claims 11 and 17; [0006]-[0007], wherein Tran describes a method for training a neural network on query commands comprising natural language: generating a plurality of node embeddings corresponding to the plurality of nodes based at least in part on the at least one edge for the source structured representation using a graph encoder; generating a plurality of text embeddings representing the at least one modification command using a text encoder; generating, using a feature fusion network, a combined embedding representing the query and the at least one modification command based on the plurality of node embeddings and the plurality of text embeddings; and generating a modified structured representation by decoding the combined embedding, wherein the modified structured representation includes an updated plurality of nodes and an updated plurality of edges representing the query with the change indicated by the at least one modification command); a neural network architecture encoder to process neural network architecture information to generate graph encodings (claim 17; [0003], [0006]-[0007], [0017], wherein Tran generates a plurality of node embeddings corresponding to the plurality of nodes based at least in part on the at least one edge for the source structured representation using a graph encoder); and a cross transformer encoder to process the word embeddings and the graph encodings to generate joint embeddings (claims 5, 10-11, 13, 17; [0020], [0048], [0068], [0073], [0125], wherein Tran incorporates a transformer with cross-attention, applying a transformer network to a plurality of combined embeddings including the plurality of node embeddings and the plurality of text embeddings).
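By way of illustration only, the following hypothetical sketch (assuming the PyTorch library; the class name, dimensions, and inputs are assumptions, not drawn from Tran) shows one way a cross transformer encoder could fuse word embeddings with graph encodings to produce joint embeddings:

```python
# Hypothetical illustration: fusing word embeddings with graph encodings
# via cross-attention to produce joint embeddings.
import torch
import torch.nn as nn

class CrossFusion(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, word_emb: torch.Tensor, graph_enc: torch.Tensor) -> torch.Tensor:
        # Text tokens attend to graph nodes: queries come from the text modality,
        # keys/values from the architecture graph, yielding joint embeddings.
        joint, _ = self.cross_attn(query=word_emb, key=graph_enc, value=graph_enc)
        return joint

words = torch.randn(1, 12, 64)  # 12 word embeddings (assumed)
nodes = torch.randn(1, 30, 64)  # 30 graph-node encodings (assumed)
print(CrossFusion()(words, nodes).shape)  # torch.Size([1, 12, 64])
```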
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Glesinger with Tran by incorporating the method of a text encoder to process natural language information to generate word embeddings, a neural network architecture encoder to process neural network architecture information to generate graph encodings, and a cross transformer encoder to process the word embeddings and the graph encodings to generate joint embeddings of Tran into the method of obtaining a model trained with…understanding of natural language in relation to neural network architectures of Glesinger, for the purpose of incorporating a feature fusion network to produce combined features based on the structured representation features and the natural language expression features (Tran: [0006]).
Glesinger does not teach a pooling module to pool the joint embeddings to generate encoded representations comprising fixed-size one-dimensional (1D) representations.
However, in the analogous art of bi-modal understanding of natural language and neural architectures, Chen teaches a pooling module to pool the joint embeddings to generate encoded representations comprising fixed-size one-dimensional (1D) representations (claims 1 and 3; page 1, paragraphs 1-5; page 2, paragraph 1, wherein Chen constructs a combined graph, embeds the joint graph and text, trains a double-branch convolutional neural network model, and detects the fused propagation structure and text, and wherein a text branch and a node branch are provided as input and one-dimensional convolution is used).
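By way of illustration only, a hypothetical sketch (assuming the PyTorch library; dimensions are assumed, not drawn from Chen) of a pooling module that reduces variable-length joint embeddings to a fixed-size one-dimensional (1D) representation:

```python
# Hypothetical illustration: mean-pooling joint embeddings of any sequence
# length into a fixed-size one-dimensional (1D) representation.
import torch

def pool_to_fixed_1d(joint_embeddings: torch.Tensor) -> torch.Tensor:
    """joint_embeddings: (seq_len, dim) -> (dim,), independent of seq_len."""
    return joint_embeddings.mean(dim=0)

short_seq = torch.randn(5, 64)   # 5 joint embeddings (assumed)
long_seq = torch.randn(200, 64)  # 200 joint embeddings (assumed)
print(pool_to_fixed_1d(short_seq).shape, pool_to_fixed_1d(long_seq).shape)
# both torch.Size([64]): the output size is fixed regardless of input length
```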
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Glesinger with Chen by incorporating the method of a pooling module to pool the joint embeddings to generate encoded representations comprising fixed-size one-dimensional (1D) representations of Chen into the method of obtaining a model trained with…understanding of natural language in relation to neural network architectures of Glesinger, for the purpose of modeling the propagation structure of each news item as a propagation tree; using the propagation tree structure to construct a joint graph; embedding the text of the joint graph and news; training a double-branch convolutional neural network; and inferring and predicting unknown samples (Chen: page 1, paragraph 2).
Regarding claim 3, Glesinger as modified by Crabtree, Tran and Chen teaches wherein: the text encoder comprises: a tokenizer to process natural language information to generate a sequence of tokens ([0335], [0337], wherein Glesinger incorporates a trained transformer-based model that translates the text-based embeddings into tokens, and wherein the embeddings and composite structures thereof may be specially formed to efficiently capture temporal and spatial aspects that facilitate coherency within generated content elements comprising temporally sequenced elements), ([0188], [0194], wherein Crabtree describes tokenization of the input representation); and a word embedder to process the sequence of tokens to generate word embeddings ([0157], [0195], [0263], [0335]-[0336], wherein Glesinger generates text embeddings and tokenizes the elements).
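By way of illustration only, a hypothetical sketch (assuming PyTorch; the toy vocabulary and dimensions are assumptions, not from the record) of a tokenizer producing a sequence of tokens and a word embedder mapping those tokens to word embeddings:

```python
# Hypothetical illustration: a toy whitespace tokenizer producing a sequence of
# tokens, followed by a word embedder mapping tokens to embedding vectors.
import torch
import torch.nn as nn

vocab = {"a": 0, "convolutional": 1, "network": 2, "with": 3, "skip": 4, "connections": 5}

def tokenize(text: str) -> torch.Tensor:
    """Map each whitespace-separated word to its vocabulary index."""
    return torch.tensor([vocab[w] for w in text.lower().split()])

embedder = nn.Embedding(num_embeddings=len(vocab), embedding_dim=16)
tokens = tokenize("A convolutional network with skip connections")
word_embeddings = embedder(tokens)
print(tokens.tolist())        # [0, 1, 2, 3, 4, 5]
print(word_embeddings.shape)  # torch.Size([6, 16]): one embedding per token
```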
Regarding claim 4, Glesinger as modified by Crabtree, Tran and Chen teaches wherein: the neural network architecture encoder comprises: a graph generator to process the neural network architecture information to generate a graph comprising a plurality of nodes, a plurality of edges, and a plurality of shapes (Abstract, [0029], [0091], [0155], [0164], [0166]-[0167], wherein Crabtree generates knowledge graphs by determining the attributes associated with each object, such as color, size, or pose, and by relationship prediction: inferring the relationships between the detected objects based on their spatial arrangement, context, and semantic understanding, wherein the knowledge graph comprises nodes representing entities, concepts, and relationships, and edges representing the connections between them); a shape embedder to process the plurality of shapes to generate shape embeddings ([0375], [0377], [0405], wherein Glesinger captures the semantic structure and uses clustering, dimensionality reduction, and/or rule mining to distill the embeddings into interpretable symbolic forms, and wherein Glesinger analyzes the principal components or dimensions to understand the most significant factors contributing to the variance in the embedding space in order to interpret the dimensions or components in terms of their semantic meaning or the attributes they capture); a node embedder to process the plurality of nodes to generate node embeddings ([0247], [0271], wherein Crabtree describes node embeddings); a summation module to sum the shape embeddings and node embeddings to generate a shape-node summation ([0236]-[0237], [0246]-[0247], wherein Crabtree describes a process that comprises embedding creation and summarization); and a graph attention network (GAT) for processing the summation and the plurality of edges to generate a graph encoding ([0152], wherein Crabtree describes a Graph Attention Network (GAT), a type of neural network architecture designed to operate on graph-structured data).
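By way of illustration only, a hypothetical sketch (assuming the PyTorch Geometric library is available; dimensions and the chain topology are assumptions, not drawn from the cited references) of the claimed shape-node summation followed by a GAT layer:

```python
# Hypothetical illustration: summing shape embeddings and node embeddings, then
# passing the summation and the edges through a GAT layer to obtain a graph encoding.
import torch
from torch_geometric.nn import GATConv  # assumes the PyTorch Geometric library

num_nodes, dim = 6, 32
node_emb = torch.randn(num_nodes, dim)   # node embeddings (assumed values)
shape_emb = torch.randn(num_nodes, dim)  # shape embeddings (assumed values)
summed = node_emb + shape_emb            # the shape-node summation

# Directed edges of the architecture graph as a 2 x num_edges index tensor:
# a simple chain 0 -> 1 -> 2 -> 3 -> 4 -> 5 (assumed topology).
edge_index = torch.tensor([[0, 1, 2, 3, 4],
                           [1, 2, 3, 4, 5]])

gat = GATConv(in_channels=dim, out_channels=dim)
graph_encoding = gat(summed, edge_index)  # attention over neighboring nodes
print(graph_encoding.shape)               # torch.Size([6, 32])
```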
Regarding claim 5, Glesinger as modified by Crabtree, Tran and Chen teaches wherein obtaining the model comprises: providing a training dataset comprising: a plurality of positive training samples, each positive training data sample comprising neural network architecture information associated with natural language information descriptive of the neural network architecture information; and a plurality of negative training samples, each negative training data sample comprising neural network architecture information associated with natural language information not descriptive of the neural network architecture information; and training the model, using supervised learning, to ([0150]-[0153], [0394], wherein Crabtree incorporates neural network architecture information associated with natural language information, wherein input data is vectorized using an embedding model and stored in a vector database, vectorizing the data allows it to be used as input for processing by a neural network, the neural network is trained using input data to learn patterns and relationships in the data, and positive and negative user feedback is used to adjust the models) maximize a similarity measure generated between the neural network architecture information and the natural language information of the positive training samples; and minimize the similarity measure generated between the neural network architecture information and the natural language information of the negative training samples ([0198], wherein Crabtree describes a dense vector representation, also known as a dense embedding or a continuous vector representation, as a way of representing data, particularly words or tokens, as dense vectors in a high-dimensional continuous space; in the context of natural language processing (NLP) and language models, dense vector representations are used to capture semantic and syntactic information about words or tokens, capturing fine-grained relationships and similarities between words), ([0304], [0307], [0321], wherein Glesinger applies vector similarity calculations to the elements of the multi-modal latent space).
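By way of illustration only, a hypothetical sketch (assuming PyTorch; the loss form and all values are assumptions, not drawn from the cited references) of a supervised contrastive objective that maximizes the similarity measure for positive pairs and minimizes it for negative pairs:

```python
# Hypothetical illustration: a contrastive objective that drives the similarity
# measure toward 1 for positive pairs and toward -1 for negative pairs.
import torch
import torch.nn.functional as F

def contrastive_loss(arch_enc: torch.Tensor, text_enc: torch.Tensor,
                     label: torch.Tensor) -> torch.Tensor:
    """label is 1.0 for a descriptive (positive) pair, 0.0 for a negative pair."""
    sim = F.cosine_similarity(arch_enc, text_enc, dim=-1)  # values in [-1, 1]
    target = 2.0 * label - 1.0  # positives -> +1, negatives -> -1
    return ((sim - target) ** 2).mean()

arch = torch.randn(4, 64)                    # 4 architecture encodings (assumed)
text = torch.randn(4, 64)                    # 4 text encodings (assumed)
labels = torch.tensor([1.0, 1.0, 0.0, 0.0])  # 2 positive, 2 negative samples
print(contrastive_loss(arch, text, labels))  # scalar loss to minimize
```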
Regarding claim 6, Glesinger as modified by Crabtree, Tran and Chen teaches further comprising generating a neural network architecture database by, for each of a plurality of neural network architecture information data samples: processing the neural network architecture information data sample, using the model, to generate an encoded representation of the neural network architecture information data sample; and storing the neural network architecture information data sample in the neural network architecture database in association with the encoded representation of the neural network architecture information data sample ([0090]-[0091], [0134], [0139], [0143]-[0144], [0230], [0308], wherein Crabtree comprises various databases including a knowledge graph database; creates, updates, aligns, and evolves ontologies and curates ontological data from diverse data sources while also creating vector semantic indices and traditional database indices; and comprises a set of neural network models that generate vector embeddings representing input data elements, the embeddings being stored in databases).
Regarding claim 7, Glesinger as modified by Crabtree, Tran and Chen teaches wherein: the input information comprises natural language input information comprising a textual description of a first neural network architecture; and the inference information comprises neural network architecture information corresponding to a neural network architecture similar to the first neural network architecture ([0091], [0142], [0393-0394] wherein Crabtree comprises a set of neural network models that generate vector embeddings representing input data elements, wherein Crabtree applies reasoning techniques to the symbolic representations to perform reasoning tasks), ([0161], [0192], [0196], [0204] wherein Glesinger applies attention-based neural networks that can infer context within the content in which the subject term is to be applied and/or from inferences derived from user behavior).
Regarding claim 8, Glesinger as modified by Crabtree, Tran and Chen teaches wherein using the model to process the input information to generate the inference information comprises: processing the input information, using the model, to generate an encoded representation of the input information; for each of a plurality of the encoded representations of the neural network architecture information data samples of the neural network architecture database ([0098], [0158], [0203], [0213]-[0224], [0237], [0240], [0246], [0352], wherein Crabtree describes embedding generation techniques that convert data into dense vector representations), using the model to generate a similarity measure between the encoded representations of: the neural network architecture information data sample; and the input information ([0160], [0240], wherein Crabtree describes similarity metrics that can be used to assess such embedding/semantic representation approaches, including but not limited to cosine similarity), ([0168], [0263], [0304], wherein Glesinger converts chains to numeric-based representations by a vector embedding process, such as a process that applies trained neural networks (e.g., Large Language Models or LLMs), and OTAVs are then generated by comparing the resulting vectors to the vectors generated by a similar vector embedding process applied to topics, the comparison being performed by the application of vector similarity evaluation methods such as cosine similarity); selecting from the neural network architecture database a neural network architecture information data sample associated with an encoded representation having a high value of the similarity measure; and generating the inference information based on the selected neural network architecture information data sample ([0019], [0061], [0163], [0177]-[0178], [0261], [0264]-[0265], [0307]-[0310], [0338], wherein Glesinger describes the inferences and relevancy models function that applies a temporally integrated, multi-modal latent space generated by application of a trained neural network), ([0107], [0189], [0310], wherein Crabtree describes that the generation of the content elements may be further personalized based upon inferences of preferences).
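By way of illustration only, a hypothetical sketch (names, encodings, and values are assumptions, not drawn from the cited references) of storing encoded representations in a database and selecting the data sample whose encoding has the highest cosine similarity to an encoded query:

```python
# Hypothetical illustration: an architecture "database" keyed by encoded
# representations, queried for the entry with the highest cosine similarity.
import numpy as np

def cos(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

database = {
    "resnet-like":      np.array([0.9, 0.1, 0.0]),  # stored encodings (assumed)
    "transformer-like": np.array([0.1, 0.9, 0.1]),
}

def best_match(query_encoding: np.ndarray) -> str:
    """Select the data sample whose encoding has the highest similarity measure."""
    return max(database, key=lambda name: cos(database[name], query_encoding))

query = np.array([0.8, 0.2, 0.1])  # encoded representation of the input (assumed)
print(best_match(query))           # "resnet-like"
```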
Regarding claim 9, Glesinger as modified by Crabtree, Tran and Chen teaches wherein: the input information comprises: natural language input information comprising a textual description; and neural network architecture input information corresponding to a first neural network architecture; and the inference information comprises Boolean information indicating whether the textual description is descriptive of the first neural network architecture ([0221], wherein Crabtree incorporates one-hot encoding, a common technique used to represent categorical variables, such as words in a vocabulary, as binary vectors; in one-hot encoding, each word is represented by a vector with a length equal to the size of the vocabulary, and the vector consists of zeros in all positions except for a single position, which is set to one, indicating the presence of the corresponding word; the input word is one-hot encoded with a 1 at the corresponding word index).
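By way of illustration only, a hypothetical sketch (the toy vocabulary is an assumption) of the one-hot encoding technique Crabtree describes at [0221]:

```python
# Hypothetical illustration of one-hot encoding: each word maps to a binary
# vector of vocabulary length with a single 1 at the word's index.
vocabulary = ["conv", "pool", "relu", "dense"]

def one_hot(word: str) -> list[int]:
    return [1 if w == word else 0 for w in vocabulary]

print(one_hot("relu"))  # [0, 0, 1, 0]
```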
Regarding claim 10, Glesinger as modified by Crabtree, Tran and Chen teaches wherein using the model to process the input information to generate the inference information comprises: processing the natural language input information, using the model, to generate an encoded representation of the natural language input information; processing the neural network architecture information, using the model, to generate an encoded representation of the neural network architecture information; using the model to generate a similarity measure between the encoded representations of the neural network architecture information and the natural language information; and generating the inference information based on the similarity measure ([0090-0091], [0139], [0143-0144], [0230], [0134], [0308] wherein Crabtree comprises various databases including knowledge graph database, and wherein Crabtree creates, updates, and aligns and evolves ontologies and curate ontological data from diverse data sources while also creating vector semantic indices and traditional database indices, wherein Crabtree comprises a set of neural network models that generate vector embeddings representing input data elements. The embeddings are stored in databases), ([0160], [0240] wherein Crabtree describes Similarity metrics which can be used to assess such embedding/semantic representation approaches can include, but are not limited to, cosine similarity), ([0168], [0263], [0304] wherein Glesinger converts chains to numeric-based representations by a vector embedding process such as a process that applies trained neural networks (e.g., Large Language Models or LLMs) and OTAVs are then generated by comparing the resulting vectors to the vectors generated by a similar vector embedding process applied to topics, the comparison being performed by the application of vector similarity evaluation methods such as cosine similarity).
Regarding claim 11, Glesinger as modified by Crabtree, Tran and Chen teaches generating an answer database by, for each of a plurality of answer data samples, each answer data sample comprising natural language information: processing the answer data sample, using the model, to generate an encoded representation of the answer data sample; and storing the answer data sample in the neural network architecture database in association with the encoded representation of the answer data sample ([0186], [0195], [0247] wherein Crabtree provides ability for the system to infer the degree of similarity, and the attribute dimensions of similarity, between a pair of objects or events that enables the system to answer questions, wherein Crabtree categorizes objects into taxonomies enables the system to answer questions such as, “What color can balls be?” to which the system might reply, “Balls can at a minimum be white or orange since baseballs can be those colors and a baseball is a type of ball.” Such deductions by the system can be accomplished through semantic chaining of generalized semantics chains with other chains, directly symbolically or through the application of proxy vector-based representations).
Regarding claim 12, Glesinger as modified by Crabtree, Tran and Chen teaches wherein: the input information comprises: natural language input information comprising a question; and neural network architecture input information corresponding to a first neural network architecture; and the inference information comprises an answer data sample selected from the answer database, the selected answer data sample being responsive to the question ([0092] wherein Crabtree incorporates the knowledge graph enables complex reasoning tasks such as entity disambiguation, question answering, and recommendation. For instance, when a user searches for ‘apple’, the system can disambiguate between the fruit and the technology company by analyzing the context and relationships in the knowledge graph and in the vector semantics index), ([0182], [0196] wherein Glesinger describes selecting answers for questions).
Regarding claim 13, Glesinger as modified by Crabtree, Tran and Chen teaches wherein using the model to process the input information to generate the inference information comprises: processing the neural network architecture input information and natural language input information, using the model, to generate a joint encoded representation of the neural network architecture input information and natural language input information; for each of a plurality of the encoded representations of the answer data samples of the answer database: using the model to generate a similarity measure between: the encoded representation of the answer data sample; and the joint encoded representation of the neural network architecture input information and natural language input information; selecting from the answer database an answer data sample associated with an encoded representation having a high value of the similarity measure; and generating the inference information based on the selected answer data sample ([0157], [0181], [0195], [0252], [0260], [0264], [0335], wherein Glesinger describes transformer-type deep learning models and textual-based descriptions that serve as inputs and are then generated into video-based output; a two-stage process may be applied in which a trained transformer-based model generates one or more vectorized embeddings of the text, which may be weighted by the relative quantified contributions to value of each embedded thematic element or concept, and a trained transformer-based model then translates the text-based embeddings into video tokens that are applied to generate a video; the embeddings and composite structures thereof may be specially formed to efficiently capture temporal and spatial aspects that facilitate coherency within generated content elements comprising temporally sequenced elements), ([0160], wherein Crabtree performs objective scoring and ranking for various embedding and/or semantic representation approaches, including the symbolic paradigm, vector space models, count-based language models, neural language models, and compositional distributional approaches; similarity metrics used to assess such embedding/semantic representation approaches can include cosine similarity; and Crabtree offers a plurality of common datasets on which to evaluate these embedding/semantic representation approaches to perform objective scoring and ranking and provides an iterative multi-dimensional optimization and evaluation process to explore the relative performance of the different techniques, data sets, and “fitness of purpose” definitions).
Regarding claim 14, Glesinger as modified by Crabtree, Tran and Chen teaches wherein: the input information comprises: a first neural network architecture information data sample corresponding to a first neural network architecture; and a second neural network architecture information data sample corresponding to a second neural network architecture; the inference information comprises similarity information indicating a degree of semantic similarity between the first neural network architecture and the second neural network architecture ([0091], [0338], [0436] wherein Crabtree uses indices for linking vectorized data element representations to ontology elements are created and iteratively refined using contextual information from comparisons between ontological data from knowledge graphs containing facts, entities, and relations using at least vector similarity comparison as part of a comparative objective function for relevance. This iterative refinement process allows the system to continuously learn and improve the accuracy and relevance of its links between vector semantic representations and ontological representations of data and to add to and curate multiple structured and even symbolic representations of data elements into effective knowledge corpora for specialized and broad-based search, reasoning and model training or utilization).
Regarding claim 15, Glesinger as modified by Crabtree, Tran and Chen teaches wherein using the model to process the input information to generate the inference information comprises: processing the first neural network architecture information data sample, using the model, to generate an encoded representation of the first neural network architecture information data sample; processing the second neural network architecture information data sample, using the model, to generate an encoded representation of the second neural network architecture information data sample; using the model to generate a similarity measure between the encoded representations of the first neural network architecture information data sample and the second neural network architecture information data sample; and generating the inference information based on the similarity measure ([0091], [0338], [0436] wherein Crabtree uses indices for linking vectorized data element representations to ontology elements are created and iteratively refined using contextual information from comparisons between ontological data from knowledge graphs containing facts, entities, and relations using at least vector similarity comparison as part of a comparative objective function for relevance. This iterative refinement process allows the system to continuously learn and improve the accuracy and relevance of its links between vector semantic representations and ontological representations of data and to add to and curate multiple structured and even symbolic representations of data elements into effective knowledge corpora for specialized and broad-based search, reasoning and model training or utilization).
Regarding claim 16, Glesinger as modified by Crabtree, Tran and Chen teaches wherein: the input information further comprises natural language input information comprising a textual description; using the model to process the input information to generate the inference information further comprises: processing the natural language input information, using the model, to generate an encoded representation of the natural language information; the similarity measure is generated based on a similarity among the encoded representations of the first neural network architecture information data sample, the second neural network architecture information data sample, and the natural language information; and the inference information indicates whether the first neural network architecture and the second neural network architecture are semantically similar in relation to the textual description ([0091], [0338], [0436] wherein Crabtree uses indices for linking vectorized data element representations to ontology elements are created and iteratively refined using contextual information from comparisons between ontological data from knowledge graphs containing facts, entities, and relations using at least vector similarity comparison as part of a comparative objective function for relevance. This iterative refinement process allows the system to continuously learn and improve the accuracy and relevance of its links between vector semantic representations and ontological representations of data and to add to and curate multiple structured and even symbolic representations of data elements into effective knowledge corpora for specialized and broad-based search, reasoning and model training or utilization).
Regarding claim 17, Glesinger as modified by Crabtree, Tran and Chen teaches wherein: the input information comprises neural network architecture information corresponding to a first neural network architecture; and the inference information comprises neural network architecture information corresponding to a neural network architecture semantically similar to the first neural network architecture ([0168] wherein Glesinger transforms relationships between objects to numerical values wherein semantic chains are converted to numeric-based representations by a vector embedding process such as a process that applies trained neural networks (e.g., Large Language Models or LLMs) and OTAVs are then generated by comparing the resulting vectors to the vectors generated by a similar vector embedding process applied to topics, the comparison being performed by the application of vector similarity evaluation methods such as cosine similarity).
Regarding claim 18, Glesinger as modified by Crabtree, Tran and Chen teaches wherein: the input information comprises: neural network architecture input information corresponding to a first neural network architecture; and natural language architecture input information comprising a textual description; and the inference information comprises neural network architecture information corresponding to a neural network architecture semantically similar to the first neural network architecture in relation to the textual description ([0168] wherein Glesinger transforms relationships between objects to numerical values wherein semantic chains are converted to numeric-based representations by a vector embedding process such as a process that applies trained neural networks (e.g., Large Language Models or LLMs) and OTAVs are then generated by comparing the resulting vectors to the vectors generated by a similar vector embedding process applied to topics, the comparison being performed by the application of vector similarity evaluation methods such as cosine similarity).
Claim 19 is similar in scope to claim 1; therefore, the claim is rejected under a similar rationale.
Claim 20 is similar in scope to claim 1; therefore, the claim is rejected under a similar rationale.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HASSAN MRABI whose telephone number is (571)272-8875. The examiner can normally be reached Monday-Friday, 7:30am-5pm EST (alternating Fridays).
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo can be reached on 571-270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HASSAN MRABI/Examiner, Art Unit 2144