DETAILED ACTION
This action is in response to the claims filed 07/14/2023. Claims 1–7 are pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 07/14/2023, 09/06/2024, and 10/31/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner, and signed copies are attached.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:
“a first module […]” in claim 1.
“a second module […]” in claim 1.
“a third module […]” in claim 2.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 1–6 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
Regarding claim 1, the claim recites, “a first module configured to: at least one of receive at least one input signal and generate at least one input signal, the input signal being associable with at least one of: at least one operating parameter and at least one target variable, associable with a structure”. However, it is unclear exactly what is required because the claim language lacks specificity (i.e., more specific language and/or punctuation is needed to make clear how the “at least one of” alternatives are grouped). In the interest of compact prosecution, the examiner is construing the limitation as “a first module configured to: receive or generate at least one input signal, the input signal being associable with at least one of: at least one operating parameter and at least one target variable, wherein the operating parameter and the target variable are associable with a structure, respectively”.
Claims 2–6 are rejected because they depend from a claim rejected under § 112(b) and necessarily include all the limitations of the rejected claim.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1–7 are rejected under 35 U.S.C. 103 as being unpatentable over Yu et al. (US 2018/0253647 A1), hereinafter “Yu”, in view of Swaminathan (US 2017/0262773 A1), hereinafter “Swaminathan”.
Regarding claim 1, Yu teaches:
a first module configured to (Yu ¶0028, ¶0040: “Perception and planning system 110 includes the necessary hardware (e.g., processor(s), memory, storage) and software (e.g., operating system, planning and routing programs) to receive information from sensor system 115, control system 111, wireless communication system 112, and/or user interface system 113, process the received information, plan a route or path from a starting point to a destination point, and then drive vehicle 101 based on the planning and control information. Alternatively, perception and planning system 110 may be integrated with vehicle control system 111” and “Some or all of modules 301-305 may be implemented in software, hardware, or a combination thereof. For example, these modules may be installed in persistent storage device 352, loaded into memory 351, and executed by one or more processors (not shown). Note that some or all of these modules may be communicatively coupled to or integrated with some or all modules of vehicle control system 111 of FIG. 2. Some of modules 301-305 may be integrated together as an integrated module”—[(emphasis added) wherein the BRI of a first module is any hardware, software, code, instruction set, or combination thereof capable of performing the function, and wherein the first module includes necessary hardware such as a processor, memory, and storage, along with software]):
at least one of receive at least one input signal and generate at least one input signal, (Yu ¶0028: “Perception and planning system 110 includes the necessary hardware (e.g., processor(s), memory, storage) and software (e.g., operating system, planning and routing programs) to receive information from sensor system 115, control system 111, wireless communication system 112, and/or user interface system 113, process the received information, plan a route or path from a starting point to a destination point, and then drive vehicle 101 based on the planning and control information. Alternatively, perception and planning system 110 may be integrated with vehicle control system 111”—[(emphasis added) wherein the system receives input signals from the sensor]):
the input signal being associable with at least one of: at least one operating parameter and at least one target variable, associable with a structure (Yu ¶0028: “Perception and planning system 110 includes the necessary hardware (e.g., processor(s), memory, storage) and software (e.g., operating system, planning and routing programs) to receive information from sensor system 115, control system 111, wireless communication system 112, and/or user interface system 113, process the received information, plan a route or path from a starting point to a destination point, and then drive vehicle 101 based on the planning and control information. Alternatively, perception and planning system 110 may be integrated with vehicle control system 111”—[(emphasis added) wherein the input signal is associable with a planned path or route (i.e., an operating parameter) associable with a destination point (i.e., target) and a vehicle (i.e., structure)]),
a second module coupled to the first module, the second module configured to process the input signal (Yu ¶0028, ¶0040: “Perception and planning system 110 includes the necessary hardware (e.g., processor(s), memory, storage) and software (e.g., operating system, planning and routing programs) to receive information from sensor system 115, control system 111, wireless communication system 112, and/or user interface system 113, process the received information, plan a route or path from a starting point to a destination point, and then drive vehicle 101 based on the planning and control information. Alternatively, perception and planning system 110 may be integrated with vehicle control system 111” and “Some or all of modules 301-305 may be implemented in software, hardware, or a combination thereof. For example, these modules may be installed in persistent storage device 352, loaded into memory 351, and executed by one or more processors (not shown). Note that some or all of these modules may be communicatively coupled to or integrated with some or all modules of vehicle control system 111 of FIG. 2. Some of modules 301-305 may be integrated together as an integrated module”—[wherein the BRI of a second module is any hardware, software, code, instruction set, or combination thereof capable of performing the function, and wherein the second module includes necessary hardware such as a processor, memory, and storage, along with software]),
to produce at least one output signal communicable for further processing (Yu ¶0022: “In one embodiment, autonomous vehicle 101 includes, but is not limited to, perception and planning system 110, vehicle control system 111, wireless communication system 112, user interface system 113, and sensor system 115. Autonomous vehicle 101 may further include certain common components included in ordinary vehicles, such as, an engine, wheels, steering wheel, transmission, etc., which may be controlled by vehicle control system 111 and/or perception and planning system 110 using a variety of communication signals and/or commands, such as, for example, acceleration signals or commands, deceleration signals or commands, steering signals or commands, braking signals or commands, etc.”—[(emphasis added)]).
Yu does not appear to explicitly teach:
by manner of at least one of: data cleaning, data wrangling, and data merging.
However, Swaminathan teaches:
by manner of at least one of: data cleaning (Swaminathan ¶0109: “Before the modelling starts, the input data is transformed to perform object tracking for modelling, resulting in the creation of two new columns as can be seen in the below table: [0110] 1) All Transactions, are cleaned in order to remove any empty or outlier data entries”—[(emphasis added)]),
data wrangling (Swaminathan ¶0194: “R is a scientific scripting language intentionally designed for statistics. Therefore, its language natively contains constructs helpful for statistical analysis. Also, it allows an interactive analysis of data without the need of continuous recompiling of code. It is suitable for quick and dirty ‘data wrangling’, testing of ideas and offers an immense amount of different statistical packages”—[(emphasis added)]), and
data merging (Swaminathan ¶0204: “These two files are merged to get the necessary data structure for the algorithm”—[(emphasis added)]).
The methods of Yu, the teachings of Swaminathan, and the instant application are analogous art because they pertain to training machine learning models to make predictions.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the methods of Yu with the teachings of Swaminathan to incorporate the recited data-preprocessing operations. One would have been motivated to do so in order to remove empty or outlier data entries, to test ideas quickly, and to obtain the data structure necessary for the algorithm (Swaminathan ¶0109, ¶0194, ¶0204).
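For illustration only, and not as a characterization of any actual implementation of Yu or Swaminathan, the following minimal Python sketch shows conventional forms of the three recited preprocessing operations; all data, column names, and thresholds are hypothetical:

    # Illustrative sketch only; hypothetical data, not taken from Yu or Swaminathan.
    import pandas as pd

    # Data cleaning: remove empty and outlier entries (cf. Swaminathan ¶0109).
    transactions = pd.DataFrame({
        "sensor_id": [1, 1, 2, None],
        "value": [0.5, 120.0, 0.7, 0.6],
    })
    cleaned = transactions.dropna()                                 # drop empty entries
    cleaned = cleaned[cleaned["value"].between(0.0, 10.0)].copy()   # simple range-based outlier removal

    # Data wrangling: derive a new column shaped for downstream modelling (cf. ¶0194).
    span = cleaned["value"].max() - cleaned["value"].min()
    cleaned["value_norm"] = (cleaned["value"] - cleaned["value"].min()) / (span + 1e-9)

    # Data merging: join two sources into the structure the algorithm needs (cf. ¶0204).
    labels = pd.DataFrame({"sensor_id": [1, 2], "target": [0, 1]})
    merged = cleaned.merge(labels, on="sensor_id", how="inner")
    print(merged)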
Regarding claim 2, Yu in view of Swaminathan teaches all the limitations of claim 1.
Yu teaches:
a third module coupled to the second module, the output signal being communicable to the third module from the second module for further transmission from the apparatus to at least one device coupled to the apparatus (Yu ¶0028, ¶0040, ¶0054: “Perception and planning system 110 includes the necessary hardware (e.g., processor(s), memory, storage) and software (e.g., operating system, planning and routing programs) to receive information from sensor system 115, control system 111, wireless communication system 112, and/or user interface system 113, process the received information, plan a route or path from a starting point to a destination point, and then drive vehicle 101 based on the planning and control information. Alternatively, perception and planning system 110 may be integrated with vehicle control system 111” and “Some or all of modules 301-305 may be implemented in software, hardware, or a combination thereof. For example, these modules may be installed in persistent storage device 352, loaded into memory 351, and executed by one or more processors (not shown). Note that some or all of these modules may be communicatively coupled to or integrated with some or all modules of vehicle control system 111 of FIG. 2. Some of modules 301-305 may be integrated together as an integrated module” and “For each group of the second groups, the system calculates a third scale vector and a third shift vector based on the second batch-norm layer, and generates a second deconvolutional layer based on the calculated vectors and the first deconvolutional layer, such that the second CNN model includes the second deconvolutional layer corresponding to the second group. For example, for group 420, the system calculates a third scale vector and a third shift vector based on batch-norm layer 416 and generates accelerated deconvolutional layer 428.”—[(emphasis added) wherein the BRI of a third module is any hardware, software, code, instruction set, or combination thereof capable of performing the function, and wherein the third module includes necessary hardware such as a processor, memory, and storage, along with software]),
wherein the output signal is receivable by the device for machine-learning based processing (Yu ¶¶0054–0055: “The system generates new CNN model 422 based on accelerated convolutional layer 428, accelerated convolutional layer 428 corresponding to group 420. In another embodiment, for each of the second groups, the system calculates a fourth scale vector and a fourth shift vector based on a second scaling layer of the corresponding second groups; and generates the second deconvolutional layer based on the calculated vectors, and the first deconvolutional layer. For example, for group 420 (group 420 has a deconvolutional layer), the system calculates a fourth scale vector and a fourth shift vector based on batch-norm layer 416 and scaling layer 418. The system then generates accelerated convolutional layer 428 based on the deconvolutional layer 414 and the fourth scale vector and the fourth shift vector. In one embodiment, the first groups of layers are extracted from a first CNN model after the first CNN model is trained with training data. In one embodiment, the first convolutional layer and the first batch-norm layer are consecutive layers. The accelerated neural network models 124 may then be uploaded onto the ADVs, which can be utilized in real-time for object classification. In one embodiment, the object to be classified is an image having a green, yellow, and red traffic light. In another embodiment, the first probability event is a probability event that the object to be classified is a green light, a yellow light, or a red light.”).
Regarding claim 3, Yu in view of Swaminathan teaches all the limitations of claim 1.
Yu teaches:
wherein the at least one output signal is communicable for further processing by manner of machine-learning (ML) based processing to generate at least one ML model (Yu Fig. 5, ¶0057: “FIG. 5 illustrates an example of a neural network models generator (such as neural network models generator 123) of an autonomous vehicle according to one embodiment of the invention. For example, neural network models generator 123 can include extracting module 123A, vector calculating module 123B, layer generating module 123C, and neural network generating module 123D. When a batch normalization transformation is applied to a deep neural network model by combining or adding batch-norm layers and/or scaling layers to inner or intermediate activation layers of the original deep neural network to improve training flexibilities, the trained deep neural network model performing inference tasks retains many of the batch-norm and/or scaling layers of the neural network model. A trained batch-normalized deep neural network may be accelerated by combining activation layers with batch-norm layers by, for example, neural network models generator 123 of server 103”).
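As general technical background on the acceleration technique quoted above (folding trained batch-norm parameters into an adjacent convolutional layer by way of scale and shift vectors), the following minimal Python sketch illustrates the arithmetic; the function and variable names are the examiner's illustration only and are not taken from Yu:

    # Illustrative sketch of folding a trained batch-norm layer into a convolution;
    # hypothetical names, following the scale/shift-vector idea quoted from Yu.
    import numpy as np

    def fold_batch_norm(W, b, gamma, beta, mean, var, eps=1e-5):
        """Return (W', b') such that conv(x, W') + b' == BN(conv(x, W) + b).

        W: (out_ch, in_ch, kH, kW) conv weights; b: (out_ch,) bias.
        gamma, beta, mean, var: (out_ch,) trained batch-norm parameters.
        """
        scale = gamma / np.sqrt(var + eps)           # per-channel "scale vector"
        shift = beta - mean * scale                  # per-channel "shift vector"
        W_folded = W * scale[:, None, None, None]    # rescale each output channel
        b_folded = b * scale + shift
        return W_folded, b_folded

    # Numerical check on a 1x1 convolution (a per-channel linear map):
    W = np.ones((2, 1, 1, 1)); b = np.zeros(2)
    gamma = np.array([2.0, 0.5]); beta = np.array([1.0, -1.0])
    mean = np.array([0.1, 0.2]); var = np.array([1.0, 4.0])
    Wf, bf = fold_batch_norm(W, b, gamma, beta, mean, var)
    x = np.array([3.0, 3.0])                         # one activation per channel
    bn_out = gamma * (W[:, 0, 0, 0] * x + b - mean) / np.sqrt(var + 1e-5) + beta
    assert np.allclose(bn_out, Wf[:, 0, 0, 0] * x + bf)

Once training fixes the batch-norm statistics, the batch-norm layer is an affine per-channel map, which is why it can be absorbed into the preceding (de)convolutional weights, consistent with the passages quoted above.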
Regarding claim 4, Yu in view of Swaminathan teaches all the limitations of claim 3.
Yu teaches:
wherein the ML based processing includes at least one of: a data normalization stage (Yu ¶0058: “In another example, extracting module 123A may extract a mean and a standard deviation value associated with the batch-norm layer in the form of”—[wherein a standard deviation value is an output of a normalization stage]),
a data splitting stage (Yu ¶0036: “Training a CNN is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. “Training” a CNN involves iteratively applying inputs to an input layer of the CNN and comparing desired outputs with actual outputs at the output layer of the CNN to calculate error terms. These error terms are used to adjust weights and biases in the hidden layers of the CNN so that the next time around the output values will be closer to the “correct” values. The distribution of inputs of each layers slows down the training, i.e., a lower training rate is required for convergence, and requires a careful parameter initialization, i.e., setting initial weights and biases of activations of the inner layers to specific ranges for convergence. “Convergence” refers to when the error terms reach a minimal value. Training a CNN in mini-batches achieves a better performance”—[(emphasis added) wherein the training is done in batches that are first split]), and
a model training stage (Yu ¶0032: “Based on the training data collected by data collector 121, machine learning engine 122 may train a set of neural network models 124 for object detection and object classification purposes”).
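For context only, the mapped normalization stage is conventional; a minimal sketch of z-score normalization using a mean and a standard deviation (hypothetical values, not code from Yu) follows:

    # Illustrative z-score normalization using a mean and a standard deviation;
    # hypothetical feature values.
    import numpy as np

    x = np.array([0.5, 0.7, 0.6, 0.9])     # hypothetical input features
    mean, std = x.mean(), x.std()
    x_norm = (x - mean) / (std + 1e-9)     # zero mean, unit variance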
Regarding claim 5, Yu in view of Swaminathan teaches all the limitations of claim 4.
Swaminathan teaches:
wherein during the data splitting stage (Swaminathan ¶¶0129–0132: “As discussed in above, sequences were split into Sessions to make movements clusterable. One of the motivations is that shorter movement patterns lend themselves better to clustering because patterns might become visible … FIG. 6 shows an exemplary sketch of clustering the sessions of respective state transitions clusters into a plurality of session clusters using the word2vec tool according to an embodiment of the present invention”),
a training dataset and a testing dataset are obtained (Swaminathan ¶¶0201–0202: “The present invention is applied and tested on two data sets”).
The same motivation that was utilized for combining Yu with Swaminathan, as set forth in claim 1, is equally applicable to claim 5.
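Similarly for context, obtaining a training dataset and a testing dataset by splitting is routine; a minimal sketch (hypothetical data, not drawn from Swaminathan) follows:

    # Illustrative 80/20 split into training and testing datasets; hypothetical data.
    import numpy as np

    rng = np.random.default_rng(0)           # fixed seed for reproducibility
    data = np.arange(100)                    # 100 hypothetical samples
    perm = rng.permutation(len(data))        # shuffle before splitting
    cut = int(0.8 * len(data))
    train, test = data[perm[:cut]], data[perm[cut:]]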
Regarding claim 6, Yu in view of Swaminathan teaches all the limitations of claim 1.
Swaminathan teaches:
wherein the structure corresponds to a pipeline (Swaminathan ¶0215: “Hence, it is preferred to first use an interactive and rich tool like R, for the verification and testing of new ideas. Once a part of the pipeline is verified and fully understood, it is preferably realized in SCALA or Python on a parallel platform like Spark”).
The same motivation that was utilized for combining Yu with Swaminathan, as set forth in claim 1, is equally applicable to claim 6.
Regarding claim 7, Yu teaches:
A processing method suitable for generating at least one output signal communicable for machine-learning (ML) based processing to derive at least one ML model, the method comprising (Yu ¶0072: “FIG. 9 is a flow diagram illustrating a method to generate a new CNN model from an original CNN model according to one embodiment of the invention. Process 900 may be performed by processing logic which may include software, hardware, or a combination thereof. For example, process 900 may be performed by a data analytics system such as data analytics system 103 (e.g., offline). The new CNN model can then be utilized by an ADV to classify an object at real-time. Referring to FIG. 9, at block 902, processing logic extracts a first groups of layers from a first convolutional neural network (CNN) model, each first group having a first convolutional layer and a first batch-norm layer. At block 904, for each of the first groups, processing logic calculates a first scale vector and a first shift vector based on the first batch-norm layer. At block 906, processing logic generates a second convolutional layer representing the corresponding group based on the first convolutional layer and the first scale and the first shift vector. At block 908, processing logic generates a second CNN model based on the second convolutional layer corresponding to the plurality of the groups. The second CNN model is utilized subsequently to classify an object perceived by an autonomous driving vehicle”):
at least one of generating and receiving at least one input signal, the input signal being associable with at least one of (Yu ¶0028: “Perception and planning system 110 includes the necessary hardware (e.g., processor(s), memory, storage) and software (e.g., operating system, planning and routing programs) to receive information from sensor system 115, control system 111, wireless communication system 112, and/or user interface system 113, process the received information, plan a route or path from a starting point to a destination point, and then drive vehicle 101 based on the planning and control information. Alternatively, perception and planning system 110 may be integrated with vehicle control system 111”—[(emphasis added) wherein the system receives input signals from the sensor]):
at least one operating parameter, at least one target variable, associable with a structure (Yu ¶0028: “Perception and planning system 110 includes the necessary hardware (e.g., processor(s), memory, storage) and software (e.g., operating system, planning and routing programs) to receive information from sensor system 115, control system 111, wireless communication system 112, and/or user interface system 113, process the received information, plan a route or path from a starting point to a destination point, and then drive vehicle 101 based on the planning and control information. Alternatively, perception and planning system 110 may be integrated with vehicle control system 111”—[(emphasis added) wherein the input signal is associable with a plan path or route (i.e., an operating parameter) associable with a destination point (i.e., target) and a vehicle (i.e., structure)]);
to produce at least one output signal (Yu ¶0022: “In one embodiment, autonomous vehicle 101 includes, but is not limited to, perception and planning system 110, vehicle control system 111, wireless communication system 112, user interface system 113, and sensor system 115. Autonomous vehicle 101 may further include certain common components included in ordinary vehicles, such as, an engine, wheels, steering wheel, transmission, etc., which may be controlled by vehicle control system 111 and/or perception and planning system 110 using a variety of communication signals and/or commands, such as, for example, acceleration signals or commands, deceleration signals or commands, steering signals or commands, braking signals or commands, etc.”—[(emphasis added)]); and
communicating the at least one output signal for machine-learning based processing to derive at least one ML model (Yu Fig. 5, ¶0057: “FIG. 5 illustrates an example of a neural network models generator (such as neural network models generator 123) of an autonomous vehicle according to one embodiment of the invention. For example, neural network models generator 123 can include extracting module 123A, vector calculating module 123B, layer generating module 123C, and neural network generating module 123D. When a batch normalization transformation is applied to a deep neural network model by combining or adding batch-norm layers and/or scaling layers to inner or intermediate activation layers of the original deep neural network to improve training flexibilities, the trained deep neural network model performing inference tasks retains many of the batch-norm and/or scaling layers of the neural network model. A trained batch-normalized deep neural network may be accelerated by combining activation layers with batch-norm layers by, for example, neural network models generator 123 of server 103”).
Yu does not appear to explicitly teach:
processing the at least one input signal by manner of at least one of:
data cleaning;
data wrangling; and
data merging.
However, Swaminathan teaches:
processing the at least one input signal by manner of at least one of: data cleaning (Swaminathan ¶0109: “Before the modelling starts, the input data is transformed to perform object tracking for modelling, resulting in the creation of two new columns as can be seen in the below table: [0110] 1) All Transactions, are cleaned in order to remove any empty or outlier data entries”—[(emphasis added)]),
data wrangling (Swaminathan ¶0194: “R is a scientific scripting language intentionally designed for statistics. Therefore, its language natively contains constructs helpful for statistical analysis. Also, it allows an interactive analysis of data without the need of continuous recompiling of code. It is suitable for quick and dirty ‘data wrangling’, testing of ideas and offers an immense amount of different statistical packages”—[(emphasis added)]), and
data merging (Swaminathan ¶0204: “These two files are merged to get the necessary data structure for the algorithm”—[(emphasis added)]).
The methods of Yu, the teachings of Swaminathan, and the instant application are analogous art because they pertain to training machine learning models to make predictions.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the methods of Yu with the teachings of Swaminathan to incorporate the recited data-preprocessing operations. One would have been motivated to do so in order to remove empty or outlier data entries, to test ideas quickly, and to obtain the data structure necessary for the algorithm (Swaminathan ¶0109, ¶0194, ¶0204).
Prior Art of Record
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
Puri et al. (“Intelligent visualization munging”) discloses data manipulation techniques, including data wrangling (i.e., munging) using machine learning: “According to examples, intelligent visualization munging may include transforming and enriching data that is to be visualized, determining features of the transformed and enriched data, determining a user role of a user associated with the transformed and enriched data, and a user interaction of the user. Intelligent visualization munging may further include learning a behavior of the user, and analyzing the features, the user role, the user interaction, and a learned behavior model to generate a recommendation that includes a predetermined number of visualizations from a plurality of available visualizations to display the transformed and enriched data. The predetermined number of visualizations is less than the plurality of available visualizations.” Puri, Abstract.
Griffith et al. (“Data ingestion to generate layered dataset interrelations to form a system of networked collaborative datasets”) discloses machine learning methods to generate training data for prediction models: “Various embodiments relate generally to data science and data analysis, and computer software and systems to provide an interface between repositories of disparate datasets and computing machine-based entities that seek access to the datasets, and, more specifically, to a computing and data storage platform that facilitates consolidation of one or more datasets, whereby data ingestion is performed to form data representing layered data files and data arrangements to facilitate, for example, interrelations among a system of networked collaborative datasets. In some examples, a method may include forming a first layer data file and a second layer data file, assigning addressable identifiers to uniquely identify units of data and data units to facilitate the linking of data, and implementing selectively one or more of a unit of data and a data unit as a function of a context of a data access request for a collaborative dataset.” Griffith, Abstract.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS SHINE whose telephone number is (571)272-2512. The examiner can normally be reached M-F, 11a-7p ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Yi can be reached on (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/N.B.S./Examiner, Art Unit 2126
/DAVID YI/Supervisory Patent Examiner, Art Unit 2126