DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 05/30/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Preliminary Amendment
The preliminary amendment filed on 05/30/2023 has been acknowledged.
Claims 1-10 are currently amended.
Claims 1-10 are pending.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “data collection processing module”, “electroencephalogram sensitivity extraction module”, “dominant color feature extraction module”, “environment dominant color measurement model training module”, “feature importance identification module”, and “quality quantitative evaluation module” in claim 7.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 1 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites “constructing a built environment dominant color measurement model and training same…” and “inputting an environment image…into a trained model”. It is unclear whether the “trained model” in the latter limitation refers to (1) the same “dominant color measurement model” previously constructed and trained, or (2) a separate “prediction model” as depicted in FIG. 4 of the specification.
FIG. 4 clearly distinguishes between a “training a model” stage within an XGBoost framework and a downstream “prediction model” block that produces an “environment sensitivity quantitative result”. The specification therefore supports a two-model architecture (training model and prediction model), whereas claim 1 does not specify whether the recited “trained model” is the same model or a distinct prediction model derived therefrom. The two interpretations yield different claim scopes:
A: a single-model interpretation, in which the same model must both be trained using dominant color features and sensitivity data and directly accept environment images as inference input;
OR
B: a two-model interpretation, in which a first model may generate parameters used by a second prediction model that receives environment images.
Because the claim language does not clarify which architecture is claimed, a person of ordinary skill in the art would not be reasonably apprised of the metes and bounds of the claim. Accordingly, claim 1 is indefinite.
Additionally, claim 1 is further indefinite because it is unclear how the “trained model”, which is trained using “sensitivity data and dominant color feature as an input”, processes the recited “environment image” during prediction. The claim does not recite extracting a dominant color feature parameter from the environment image prior to input into the trained model, nor does it recite that the trained model is configured to receive raw image data. FIG. 4 of the specification depicts feature selection and normalization prior to model training and a distinct prediction stage. Therefore, it is unclear whether the trained model operates on feature vectors or raw images. These alternative interpretations affect claim scope.
A person of ordinary skill in the art could justly and immediately ask: “How can a model trained on feature vectors (dominant color feature parameters) accept a raw image as input unless the image is first converted into the same feature representation?” This creates a second ambiguity for a person of ordinary skill in the art attempting to replicate the invention, as illustrated by the sketch below.
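For illustration only, the following minimal Python sketch (all names and values are hypothetical, not drawn from the specification) shows why interpretation B presupposes an intermediate feature extraction step that claim 1 never recites:

```python
# Hypothetical sketch of the ambiguity: a model trained on feature vectors
# cannot consume a raw image unless the image is first reduced to the same
# feature representation used during training.
import numpy as np

def extract_dominant_color_features(image: np.ndarray) -> np.ndarray:
    """Hypothetical reduction of an H x W x 3 image to a d-dimensional
    dominant color feature vector (mean RGB is a stand-in here)."""
    return image.reshape(-1, 3).mean(axis=0)

class TrainedModel:
    """Stand-in for the claimed 'trained model'; assumes feature-vector input."""
    def predict(self, features: np.ndarray) -> float:
        return float(features.sum())  # placeholder sensitivity score

model = TrainedModel()
image = np.random.rand(64, 64, 3)

# Interpretation B requires this intermediate step, which the claim omits:
features = extract_dominant_color_features(image)
sensitivity = model.predict(features)
```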
Furthermore, claim limitations in claim 7, including “data collection processing module configured to acquire several built environment images and electroencephalogram data corresponding thereto, and convert same into several build environment sequence samples; an electroencephalogram sensitivity extraction module configured to extract an electroencephalogram sensitivity index from the electroencephalogram data, so as to obtain a built environment dominant color sensitivity value; a dominant color feature extraction module configured to identify and segment an image color from an image sample, so as to obtain an image color cluster and a dominant color feature parameter; an environment dominant color measurement model training module configured to construct a built environment dominant color measurement model, input sensitivity data and the dominant color feature parameter, and train the model through an XGBoost decision tree algorithm; a feature importance identification module configured to identify an important dominant color feature, and construct a comprehensive environment dominant color measurement system according to an environment dominant color feature selection table; and a quality quantitative evaluation module applied to a built environment measurement method and configured to evaluate an environment dominant color quality according to a dominant color feature weight,” invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 2 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (hereinafter “Zhang”; “Emotional Responses to the Visual Patterns of Urban Streets: Evidence from Physiological and Subjective Indicators”) in view of Xing et al. (hereinafter “Xing”; “Exploiting EEG signals and audiovisual feature fusion for video emotion recognition”), and further in view of Ghebreab et al. (hereinafter “Ghebreab”; “A Biologically Plausible Model for Rapid Natural Image Identification”).
Zhang teaches a measurement method based on image electroencephalogram sensitivity data for a built environment dominant color, comprising: acquiring electroencephalogram data corresponding to a built environment image sample (Abstract: “we recruited 26 participants and scrutinized their emotional response to various urban street scenes through an immersive exposure experiment using virtual reality.”; Section 2.3, Measurements of the Participants’ Environmental Perception: “EEG data were obtained from the EMOTIV EPOC+”); calculating an environment dominant color sensitivity on the basis of EEG data (Section 2.3: “EMOTIV EPOC+ can further generate emotional indicators in real-time based on brain signals…”, which shows an EEG-derived sensitivity computed from EEG. The computed metrics are stimulus response measurements: Interest measures “the degree of attraction or aversion of current stimuli”, with high scores indicating “a strong affinity” and low scores indicating “a strong aversion to the task”. The statistics found for color percentages show a calculated dominance of color sensitivity: “Among the color characteristics, the average proportions of green-class, blue-class, and red-class colors were 5.30% (±4.74%), 15.37% (±9.67%), and 5.75% (±3.75%), respectively. Their proportions were relatively low and fluctuate greatly, indicating that most of the people’s sight in urban streets is still occupied by low-saturation colors like black, white, and gray.” Zhang shows that the computed indicators such as interest and excitement (Table 3) are calculated from EEG and represent response sensitivity to the VR scene, which includes the scene’s color composition.); and extracting a dominant color feature parameter according to the built environment image sample (Zhang teaches extracting quantified color composition features from images, which constitute “dominant color” parameters; Section 2.2.2, Color Classification of the Images: “The color classification in this study is based on the HSV color model” and “we categorized the color of each pixel in the street scene images into six classes: red, green, blue, gray, white, and black.” The pixel color classification and the resulting color percentages presented above show dominant color feature parameters.).
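For illustration only, a minimal Python sketch of the kind of per-pixel HSV classification Zhang describes; the HSV thresholds below are illustrative assumptions, not Zhang’s published boundaries:

```python
# Sketch: classify each pixel into six color classes (red, green, blue,
# gray, white, black) and compute per-class proportions as the d = 6
# dominant color feature vector. Thresholds are assumed for illustration.
import numpy as np
from matplotlib.colors import rgb_to_hsv

CLASSES = ["red", "green", "blue", "gray", "white", "black"]

def classify_pixels(rgb: np.ndarray) -> np.ndarray:
    """rgb: H x W x 3 array in [0, 1]. Returns per-pixel class indices."""
    hsv = rgb_to_hsv(rgb)
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    labels = np.full(h.shape, 3)                        # default: gray
    labels[(s < 0.15) & (v > 0.85)] = 4                 # white: low sat, high value
    labels[v < 0.15] = 5                                # black: low value
    chromatic = (s >= 0.15) & (v >= 0.15)
    labels[chromatic & ((h < 1/12) | (h > 11/12))] = 0  # red hues
    labels[chromatic & (h > 1/4) & (h < 5/12)] = 1      # green hues
    labels[chromatic & (h > 1/2) & (h < 3/4)] = 2       # blue hues
    return labels                                       # other hues stay gray here

def dominant_color_features(rgb: np.ndarray) -> np.ndarray:
    """Proportion of pixels per class: the d = 6 feature vector."""
    labels = classify_pixels(rgb)
    return np.bincount(labels.ravel(), minlength=6) / labels.size

image = np.random.rand(480, 640, 3)
print(dict(zip(CLASSES, dominant_color_features(image).round(3))))
```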
Zhang does not explicitly teach a dominant color measurement model trained on sensitivity data and dominant color feature as an input, nor inputting an environment image into the trained model to predict a dominant color sensitivity result.
Xing teaches constructing a model trained on EEG measurements (sensitivity data) and color features (Xing states in Section 2.1, Emotion Recognition Based on Video Features: “we will combine the audio-visual features of video with the EEG features of participants to train the classifier.” Table 7 identifies the visual features, a main component of which is color energy; color energy is a computer vision metric that quantifies the emotional intensity of colors in a scene. Xing states that the dataset is “a fusion dataset of EEG and video features” and that subsequently “machine-learning algorithms, including… XGBoost, …were applied to obtain the optimal model for video emotion recognition based on a multi-modal dataset.” This reads on the claim limitations because the ML algorithms are trained and evaluated on the fused multi-modal dataset, as also supported by Figures 1 and 7.).
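For illustration only, a minimal Python sketch (synthetic data; hypothetical feature names) of the fusion-then-train pattern Xing describes, in which EEG-derived metrics are concatenated with visual color features and an XGBoost model is fit on the fused matrix:

```python
# Sketch: fuse EEG-derived metrics with color features and train XGBoost.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples = 200
eeg_features = rng.normal(size=(n_samples, 6))    # e.g., engagement, excitement, ...
color_features = rng.random(size=(n_samples, 6))  # e.g., per-class color proportions
X = np.hstack([eeg_features, color_features])     # the fused multi-modal matrix
y = rng.integers(0, 2, size=n_samples)            # emotion / sensitivity label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = xgb.XGBClassifier(n_estimators=100, max_depth=3, learning_rate=0.1)
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```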
Neither Xing nor Zhang inputs an environment image into a trained model so as to obtain a predicted “dominant color” sensitivity result.
Ghebreab teaches inputting an environment image into a trained model (“In each experiment, we constructed a neural response model from the Weibull statistics of the presented images and corresponding EEG data, which we then applied to predict EEG responses to a new collection of natural or artificial images.” and “Given a set of M new images and their Weibull parameters Y, the Weibull response model provides the EEG prediction ĝc(t).” The EEG responses to these new images are predicted using the Weibull response model and its derivatives, as seen in equations 2-5.) so as to predict a dominant color sensitivity result (Ghebreab states in the results that they “present correlations between ERP signals from across the entire brain and the two parameters of the Weibull fit to the sum of selected local contrast values in the gray-level, blue-yellow and red-green components of each image. Correlations are strikingly high at electrode Iz overlying the early visual cortex. The peak r2 (square of the correlation coefficient) over time for that electrode is 75 percent”. This shows that Ghebreab provides an explicit flow in which the predicted output of the model is an EEG response function for each channel, and the predictor extraction explicitly depends on the blue-yellow and red-green color components. The chromatic channel predictor construction allows the model to output EEG signals (whose sensitivity can be seen through the amplitude of their peaks) that are necessarily rooted in color.).
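For illustration only, a minimal Python sketch (synthetic images and EEG; a simplified contrast statistic) of Ghebreab’s pattern: fit a two-parameter Weibull to each image’s contrast distribution, learn a map from those parameters to the EEG response, then apply the map to a new image:

```python
# Sketch: Weibull parameters of image contrast -> linear EEG response model
# -> predicted EEG for a NEW image. All data here are synthetic.
import numpy as np
from scipy.stats import weibull_min
from sklearn.linear_model import LinearRegression

def weibull_params(image: np.ndarray) -> np.ndarray:
    """Fit a Weibull to the image's local contrast magnitudes (gradient proxy)."""
    gy, gx = np.gradient(image)
    contrast = np.hypot(gx, gy).ravel() + 1e-9
    shape, _, scale = weibull_min.fit(contrast, floc=0)
    return np.array([shape, scale])  # the two Weibull parameters

rng = np.random.default_rng(1)
train_images = [rng.random((32, 32)) for _ in range(50)]
Y = np.array([weibull_params(im) for im in train_images])  # M x 2 parameters
eeg = rng.normal(size=50)  # response at one electrode / time point

response_model = LinearRegression().fit(Y, eeg)            # per-channel model
new_image = rng.random((32, 32))
predicted_eeg = response_model.predict(weibull_params(new_image)[None, :])
print("predicted EEG response:", predicted_eeg[0])
```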
A person of ordinary skill in the art at the time this invention was effectively filed would have combined Zhang’s built environment color feature framework with EEG-based outcomes, Xing’s explicit ML training and model selection pipeline using EEG and visual features, and Ghebreab’s explicit image-to-predicted-EEG response model because together they yield a scalable two-stage architecture that matches the claimed invention. Xing teaches how to construct and optimize a trained model from EEG and visual feature data (color energy), while Ghebreab teaches how a trained response model can then be applied to a new environment image to produce EEG predictions grounded in color processing components. Once training and estimation are resolved, the system can generate EEG-based sensitivity estimates for additional built environment images without repeating EEG collection for every candidate image, while still using the same kind of color feature representation Zhang relies on for built environments.
As per claim 2
Zhang, Xing, and Ghebreab teach all claim limitations addressed in the rejection of claim 1 under 35 U.S.C. 103 above; see the rejection of claim 1.
Zhang teaches collecting EEG data of J subjects (Section 2.3, Measurements of the Participants’ Environmental Perception: “We measured the participants’ emotional response…EEG data were obtained”) on built environment images (Abstract: “models to examine people’s emotional response to the physical element configuration and color composition of street scenes”; Introduction: “of the most common forms of the built environment, urban streets gather a large number of human activities, and their functional diversity, façade forms, and scale characteristics”; Study Design: “We conducted a VR-based experiment in which participants were exposed to urban street scenes, which were collected on-site in Beijing using a dual fisheye panoramic camera (RICOH THETA V, Ricoh Company Ltd., Tokyo, Japan) on a clear day.”) under the same laboratory environment (Zhang uses a controlled experimental setup, which necessarily includes the same laboratory environment for participant sessions; Study Design: “There were 39 selected urban street scenes in total, which were randomly assigned to four groups, with 12 scenes in each group and some of them repeated in different groups. All participants were randomly assigned to explore one of the four groups during the experiment. The physiological indicators of the participants, including EEG (Emotiv EPOC+, EMOTIV Inc., San Francisco, CA, USA), EDA, and HR (E4 wristband, Empatica Inc., Cambridge, MA, USA), were obtained by bio-monitoring sensors in real-time, and the subjective evaluations were conducted through a question-and-answer interview.”) to obtain I*J electroencephalogram data groups (Zhang shows subject-wise grouping, one data set per subject; Zhang records EEG from each participant and maintains data per participant for subsequent analysis), and a data group and size where d denotes a dominant color feature dimension of each data group (Zhang gives a finite-dimensional color feature from images; Section 2.2.2, Color Classification of the Images: “The color classification of the street scene images is based on the HSV color model… we categorized the color of each pixel in the street scene images into six classes: red, green, blue, gray, white, and black.” To clarify, each image is split into six color classes, which create a fixed feature vector of d = 6. Zhang thus shows a dominant color feature dimension d corresponding to 6 discrete color classes pulled from built environment images.) and n denoting the number of EEG data samples collected at a time (Zhang shows a sampled time series; the EEG metrics “included engagement, excitement, stress, relaxation, interest and focus.” EEG already functionally samples the response continually over time, and the EEG metrics are computed over multiple EEG signal samples, not just one impulse. Therefore, for each subject-image pair, Zhang collects multiple EEG samples per measurement interval; the acquisition itself necessarily comprises multiple EEG samples over time for each image stimulus.).
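For illustration only, a minimal Python sketch of the data organization the claim recites, using Zhang’s counts (39 scenes, 26 participants, 14 EEG channels) as illustrative sizes; n is an assumed value:

```python
# Sketch: I images x J subjects yields I*J EEG data groups, each holding
# n time samples, with a d = 6 dominant color feature vector per image.
import numpy as np

I, J = 39, 26         # images and subjects (Zhang's counts, for illustration)
n, channels = 60, 14  # assumed samples per stimulus interval; EPOC+ channels
d = 6                 # red, green, blue, gray, white, black proportions

eeg_groups = np.zeros((I, J, n, channels))  # one group per (image, subject) pair
color_features = np.zeros((I, d))           # fixed-dimension features per image

assert eeg_groups.shape[0] * eeg_groups.shape[1] == I * J  # I*J data groups
```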
Accordingly, a person of ordinary skill in the art at the time this invention was effectively filed would have found it obvious to continue refining the Zhang/Xing/Ghebreab modified workflow with Zhang’s acquisition protocol (directed towards EEG data collected from multiple subjects under a controlled laboratory environment, creating subject-wise grouped EEG data with a fixed-dimension color feature representation per image and repeated time sampling of EEG observations). Zhang’s protocol produces the structured dataset required for Xing’s explicit multimodal machine learning training on EEG signals and visual/color features and for Ghebreab’s estimation of a predictive response model that maps input scene images to predicted EEG outputs. This enables greater scalability than non-controlled data collection while maintaining a consistent linkage between color composition features and EEG response outcomes.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (“Emotional Responses to the Visual Patterns of Urban Streets: Evidence from Physiological and Subjective Indicators”) in view of Xing et al. (“Exploiting EEG signals and audiovisual feature fusion for video emotion recognition”), and further in view of Bukowski et al. (hereinafter “Bukowski”; US 2024/0013925 A1).
Zhang teaches a data collection processing module used to acquire several built environment images and EEG data corresponding thereto, and convert same into several built environment sequence samples (Zhang shows built environment street scenes and EEG collected in real time; Section 2.1, Study Design: “There were 39 selected urban street scenes …The physiological indicators of the participants, including EEG (Emotiv EPOC+, EMOTIV Inc., San Francisco, CA, USA), EDA, and HR (E4 wristband, Empatica Inc., Cambridge, MA, USA), were obtained by bio-monitoring sensors in real-time, and the subjective evaluations were conducted through a question-and-answer interview.” Regarding the “conversion to sequence samples”, Zhang discloses repeated image stimulus trials and an organized experimental structure, which can be seen as producing structured samples or datasets for downstream modeling.); an EEG sensitivity extraction module configured to extract an electroencephalogram sensitivity index from the electroencephalogram data, so as to obtain a built environment dominant color sensitivity value (Zhang shows EEG performance metrics computed and used as quantitative outputs per scene, which supports index-like EEG measures per stimulus; Section 2.3, Measurements of the Participants’ Environmental Perception: “EEG data were obtained from the EMOTIV EPOC+, a mobile EEG headset that has 14 channels to capture brain wavebands. The metrics calculated by the EMOTIV performance metrics algorithms for cognitive states were used in this study. They were output at 0.1 Hz and included the following six metrics: engagement (En), excitement (Ex), stress (St), relaxation (Re), interest (In), and focus (Fo)”. Claim 7’s language does not require a specific EEG index formula as in previous claims; extracting quantitative EEG measures per image stimulus meets the “EEG sensitivity index” module concept, as an index is simply an organized listing of such information.); and a dominant color feature extraction module configured to identify and segment an image color from an image sample, so as to obtain an image color cluster and a dominant color feature parameter (Zhang squarely supports color feature extraction from images using HSV, pixel-level categorization, and segmentation; Section 2.2.2, Color Classification of the Images: “The color classification in this study is based on the HSV color model” and “we categorized the color of each pixel in the street scene images into six classes…”, as also shown in Figure 2’s street scene semantic segmentation and color classification results. Regarding “color cluster”, color clusters can be groupings of pixels of similar or varying color class; Zhang’s pixel classification into HSV-based categories and the retained regions correspond to this.).
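For illustration only, a minimal Python sketch of one way a dominant color feature extraction module could obtain image color clusters; k-means is an assumed clustering choice, as neither the claim nor Zhang mandates it:

```python
# Sketch: segment an image's pixels into color clusters and take each
# cluster's center color and pixel share as dominant color feature parameters.
import numpy as np
from sklearn.cluster import KMeans

image = np.random.rand(120, 160, 3)       # stand-in for a street scene image
pixels = image.reshape(-1, 3)

km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(pixels)
cluster_sizes = np.bincount(km.labels_, minlength=6) / pixels.shape[0]

# Dominant color feature parameters: cluster center colors and their shares.
for center, share in zip(km.cluster_centers_, cluster_sizes):
    print(np.round(center, 2), f"{share:.1%}")
```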
Xing teaches an environment dominant color measurement model training module configured to construct a built environment dominant color measurement model which inputs sensitivity data and the dominant color feature parameter, and train the model through an XGBoost decision tree algorithm (Xing teaches constructing a model trained on EEG measurements and color features; Section 2.1, Emotion Recognition Based on Video Features: “we will combine the audio-visual features of video with the EEG features of participants to train the classifier.” Table 7 identifies the visual features, a main component of which is color energy, a computer vision metric that quantifies the emotional intensity of colors in a scene. Xing states that the dataset is “a fusion dataset of EEG and video features” and that subsequently “machine-learning algorithms, including… XGBoost, …were applied to obtain the optimal model for video emotion recognition based on a multi-modal dataset.” This reads on the claim limitations because the ML algorithms are trained and evaluated on the fused multi-modal dataset, as also supported by Figures 1 and 7; Xing thereby supplies the teaching of training an XGBoost model on EEG and visual features.).
Neither Zhang nor Xing supports a feature importance identification module configured to identify an important feature that then constructs a comprehensive system according to a feature selection, nor a quality quantitative evaluation module configured to evaluate feature quality according to a dominant feature weight.
Bukowski not only shows XGBoost as a trained model with selection training (Paragraph [0067]: “For XGBoost, the feature order was determined using mean SHAP value impacts on model output”; Bukowski also describes training classifiers and then using subsets of features in cross-validation to assess performance, as supported by the whole of paragraph [0067].), but also a feature importance identification module that identifies an important feature according to a feature selection table (Paragraph [0067]: “feature importance for each parameter was computed in one of two ways depending on the classifier. For XGBoost, the feature order was determined using mean SHAP value… Subsets of the most important features were then used in ten-fold cross-validation to assess model performance with increased feature counts for each algorithm…” A “feature selection table” can be met by a ranked or ordered feature list used to select subsets for modeling; it is functionally a table of selected features and/or their ordering. Bukowski meets this criterion.) and a quantitative evaluation module used to evaluate quality according to a dominant color feature weight (Bukowski ties evaluation to a scoring metric and overall score, including a weighted sum, and uses importance or impact values within; Paragraph [0069]: “one or more scoring metrics can be used for validation…the overall score can be a weighted sum…”; Paragraphs [0080]-[0081]: “For the gradient-boosted trees method, we computed values to analyze feature importance based on the average impact on model output for the top 20 parameters presented. FIGS. 7A and 7B shows the 20 most important features from the XGBoost model as identified via computation of SHapley Additive exPlanations (SHAP) values for both the t.sub.early and t.sub.term scenarios”. Under the broadest reasonable interpretation, feature importance or impact values can constitute weights; that concept, together with using quantitative scoring to evaluate and choose models and therefore to evaluate the quality of the output measure, reads on the claimed “quality quantitative evaluation module”.).
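For illustration only, a minimal Python sketch (synthetic data) of the Bukowski-style workflow: rank features by mean absolute SHAP value from an XGBoost model, then re-evaluate with ten-fold cross-validation over growing subsets of the top-ranked features:

```python
# Sketch: SHAP-based feature ranking (the "feature selection table") and
# subset cross-validation, on synthetic data.
import numpy as np
import shap
import xgboost as xgb
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 12))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=300) > 0).astype(int)

model = xgb.XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)
importance = np.abs(shap_values).mean(axis=0)  # mean |SHAP| per feature
order = np.argsort(importance)[::-1]           # ranked feature list

for k in (1, 3, 6, 12):                        # growing top-k subsets
    subset = X[:, order[:k]]
    clf = xgb.XGBClassifier(n_estimators=100, max_depth=3)
    score = cross_val_score(clf, subset, y, cv=10).mean()
    print(f"top {k:2d} features: CV accuracy = {score:.3f}")
```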
Accordingly, a person of ordinary skill in the art would have been motivated to combine Zhang with Xing because Zhang supplies built environment image color features and EEG responses, while Xing teaches constructing and training ML models using EEG sensitivity data and visual features (in regards to color sensitivity), with that training explicitly including XGBoost to obtain an optimal predictive model. A person of ordinary skill in the art would further incorporate Bukowski for identification of important features and for quantitative evaluation based on feature weights: Bukowski explicitly teaches computing importance values for input features and identifying the “most important features”, as well as computing and outputting quantitative utility and an overall utility rate while using feature importance in making determinations. The advantage of the combination is that it enables an end-to-end interpretable system in which Zhang provides reliable acquisition and extraction of EEG response metrics and structured color features for built environments, Xing provides a proven ML training approach that includes XGBoost over EEG sensitivity data and visual (color) features for downstream predictive modeling, and Bukowski adds explainability and validity through feature importance ranking and decision-grade quantitative scoring. This allows the system not only to train a model but also to identify which color features drive sensitivity outcomes and to output a single quality score for practical evaluation workflows.
Allowable Subject Matter
Claims 3-6 and 8-10 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHANE WRENSFORD CODRINGTON whose telephone number is (571) 272-8130. The examiner can normally be reached 8:00 am-5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Bella can be reached at (571) 272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHANE WRENSFORD CODRINGTON/Examiner, Art Unit 2667
/TOM Y LU/Primary Examiner, Art Unit 2667