Power Insights podcast: Episode 2 - San Fernando Event.

Feb. 5, 2021, midnight by Dave Angell | Last modified Feb. 5, 2021, 9:09 a.m.

The following is a transcript of our Power Insights podcast, Episode 2 San Fernando Event.

Dave Angell: Well, good day to you. This is Dave Angell from the Northwest Power Pool and today, I'll be talking with some folks representing NERC and we're going to talk about the San Fernando Disturbance. I just want to let everyone know that I actually lived the first nine years of my life in San Fernando, so I am familiar with the area. I have with us today Rich Bauer, and Rich Bauer is with the event analysis team at NERC, and Ryan Quint and he's with the engineering team at NERC, and these guys are going to help us understand the disturbance, the key findings and the recommendation that came out of this disturbance. So with that, I am going to turn it over to Rich Bauer, who's going to describe the San Fernando disturbance. Thank you, Rich.

Rich Bauer: Thanks, Dave. We're glad to be here and visit with you about this today. So, like Dave said, the event we were going to talk about today is what we have phrased or named the San Fernando Disturbance. And the San Fernando Disturbance, it started out as a transmission line fault and that transmission line fault occurred in the San Fernando area, hence we called it the San Fernando Disturbance.

So the San Fernando Disturbance is number five in our series of transmission loss events that have triggered a solar PV loss, and so on July 7 of this year, we had a 230 kV line fault. And the 230 kV fault was actually interesting in this case. The fault was actually a simultaneous fault on two circuits. There was a common tower 230 kV double circuit and the static wire failed and when the static wire came down, it initiated a single line to ground fault on both of those circuits. And for that initial fault, the initial static wire failure, we lost approximately 200 megawatts of solar PV resource during that initial fault.

But then what happened after that is that both circuits relayed, both circuits cleared in under three cycles, normal clearing, what we would expect. And like I said, we lost about 200. Well, once those circuits cleared and locked out, then the system operators, they test energized each one of those circuits. Well, when they test energized the first circuit, the static wire was not engaged with the circuit anymore, and that circuit was actually energized and held. And so they put the first circuit back in service without any event happening, but when they test energized the second circuit, at that point in time, the static wire had actually become involved with all three phases of that circuit.

And so when they test energized, for all intents and purposes, they closed into a bolted three phase fault. And that bolted three phase fault, once again, it cleared very quickly, protection cleared it in under three cycles again, but of course, being a three phase bolted type fault, we had a fairly significant three phase voltage dip, sag, depression, whatever you want to call it on the system. And during that test energize into that three phase bolted fault, we lost approximately 1,000 megawatts of solar PV generation resource during that time.

So that's really what the disturbance was, is once again, like I say, this is number five in our series for those of you who are keeping up with this. This all started back in August of 2016 with the now infamous Blue Cut Fire disturbance. So we've had Blue Cut, we've had the Canyon Two Fire disturbance, then we had the Angeles Forest and the Palmdale Roost Disturbances last April, April and May, I guess it was. And then this year, we have the San Fernando.

So this is an ongoing saga for us in the industry, in that we continue to see these solar PV losses during external system faults. And so really, the reason that we look into it, the reason we publish these disturbance reports and really the reason we talk like with you today here, Dave, is to just try to disseminate our findings from analyzing these disturbances and see if we can get a little better at solar PV basically riding through these external system faults because as, once again, all of us are aware and knowledgeable of, is that unexpected loss of generation for faults that they really don't have to go off the system for, creates a bit of a reliability risk in the system. So we want to learn from these events and if we just can get a little better performance out of it. I'll probably pause there and see if Ryan wants to add anything. Did I miss something? Or if you had any questions, Dave.

Dave Angell: I think in terms of the generation loss of this event, being 200 plus 1,000, so let's say 1,200. That's like the loss of a single unit at a nuke plant. So that wouldn't necessarily rise to the level of an event analysis, if it tripped under normal conditions. Is that a fair statement?

Rich Bauer: Yes, yes. That would be a fair statement.

Dave Angell: Yeah. And so what makes this different is the unintended loss, right? The unexpected loss when it really shouldn't go out of service.

Rich Bauer: Yeah. Yeah, absolutely, Dave. And I think one of the other concerns, the reason that we really look into these also, is that ... so a single unit trip, there would more than likely be a single cause for that. And what we see in these solar PV losses, and specifically in regard to the San Fernando Disturbance, that thousand megawatt loss that we had, that was actually across 50 separate facilities. So we had 50 separate solar PV facilities, plants, installations, whatever moniker you want to put on them. We had 50 of those that reduced output.

Now, out of those 50 facilities, they reduced their output anywhere between one megawatt and 137 megawatts. So really, that widespread loss is the thing that I think concerns us as much as anything because there are multiple facilities that are reducing their output or dropping off in response to these disturbances. And so, if we didn't analyze that and talk about it and try to get better performance out of that, then how large could that get? Maybe if we had a fault in a different spot, maybe we could get a fault in the right spot that, instead of 1,000, maybe it might be 2,000. But I think that widespread, where we see multiple facilities exhibiting this less than desirable ride through type behavior, I think that's really what we want to talk about because we would like to try to contain that, right, and get better. Does that make sense?

Dave Angell: Certainly does. Thank you. And Ryan, you had some additional insight to offer?

Ryan Quint: Yeah. I was just going to stress pretty much what Rich just said there about the fact that really what we're seeing is more systemic. So while the megawatt side may only be 1,000, and I hate to say that, really the number should be zero, right? Because we had a normally cleared fault with no consequential loss of generation, and so that number should, ideally, be pretty close to zero in terms of reduction of output to a normally cleared two and a half cycle fault event. 

And so the fact that we're seeing many, many resources exhibiting some type of abnormal performance, some of which we've been aware of in the other disturbances, some of which was new findings for us in the event analysis space, that's what really catches our eye. 

And going all the way back to the Blue Cut Fire, this is really why we do this: to identify potential systemic problems and provide guidance on how to mitigate those. We produce reliability guidelines on various levels in the NERC world to try to get out in front of that, to provide guidance to the industry on how to mitigate some of those issues. So this is another event in the ongoing saga.

Dave Angell: Well, thank you for that. I was wondering, before we get into the key findings and recommendations, you indicated that there were maybe some new findings in this particular event that weren't associated with the other four events?

Ryan Quint: Yeah, and I could talk about that for just a second because I think that's an important point that always comes up when we start these discussions, we hear people saying, "Well, I read the report and it looks like we just have the same problems that we had in all the other events," right? And that, "No one's fixing the problems." And I think an important point that needs to be made is that there are some issues that are ongoing in each of the events, but each event has its own set of issues associated with it. So when we look at, say, the Blue Cut Fire, the predominant reason for tripping was erroneous tripping due to the way that the inverters were calculating frequency. That was really one particular inverter manufacturer that then went out and fixed up that problem and we have never seen that problem again on the bulk system. So, that was deemed a big success because we were able to eliminate that problem with the inverter manufacturer.

Then we had the Canyon Two Fire and we had transient AC over voltage as the primary form of tripping, along with some other forms of tripping that were fairly small. And so, that one really caught our attention because sub-cycle, on the order of one millisecond, over-voltage was leading to tripping of a bunch of inverters.

And then in the Palmdale Roost and Angeles Forest events, we saw a bit of the same, but we started seeing things like phase-locked loop loss of synchronism, DC reverse current tripping, et cetera, et cetera. And then in this event, and we'll get into that, some of the key findings, we saw DC under-voltage, DC over-current, and again, those predominantly attributed to one specific inverter manufacturer.

And so each one of these events has its own nature and form of the reduction of output. And in each of these events, we've seen the ongoing momentary cessation challenges, some of which can be fixed, some of which are legacy inverters that have been hard-coded to perform that feature. And so, we have a good feel for those resources that we know are going to do ... they regularly show up in our disturbances and we expect it, but then we see newer facilities and when that number is approaching the order of 50, like Rich said, it starts to get our attention. So I think that's an important point to make, that each event is a little bit different and we're learning new things as each of these events gets analyzed, and that highlights the value of performing this analysis in the first place.

Dave Angell: One thing I was very curious about, this particular event was ad hoc reporting. So is that, essentially, informal reporting? So it's not required?

Rich Bauer: Yeah, that is true. So with the reporting aspect, there's two paths that we have at NERC. So one path is with EOP-004, the NERC standard. I hate to bring up the NERC Standards and that stuff with Ryan and I because when Ryan and I speak with people, we always give the caveat, "We're not with compliance, we're not auditors, we don't work with standards and things like that." But when it comes to talking about the reporting, EOP-004 just has to bubble up and be talked about.

So EOP-004 is a standard and there are some reporting requirements for that. There are some levels of generation loss that are required to be reported under EOP-004, however, typically that's a unit in a single area, that kind of stuff. And so, with this being really widespread, nothing meets the threshold for reporting based on that. So from an EOP-004 perspective, it really doesn't meet the criteria for reporting.

The other avenue where we get reporting, and this one is not required reporting by any means, but it's the event analysis process. And so the event analysis process, the NERC event analysis process, is a voluntary process. It's voluntary if entities would like to participate in that process. We do actually have a very high percentage of entities that do participate and provide brief reports and provide data on these. And in fact, that did happen with this July 7 event. California Independent System Operator did participate, they did provide a brief report for this event.

I guess one point that I really want to make with that one, though, is that just very, very recently, we made a change to the categories for the event analysis process that really did allow that ... or, I shouldn't say allow it, but actually did create a category that this event fell under. And so, just at the beginning of this year, we created categories for 500 megawatts or more of aggregated, non-consequential loss of inverter-based resources. We refer to those, if you're familiar with the event analysis process, we have category one, two, three, four and five events.

And in category one, we have the Category 1i and 1j events, which basically are focused on the inverter-based resource losses, and we just created those events. So on the EA process side, the voluntary process side, we actually do have ... any of these disturbances that would result in greater than a 500 megawatt loss, then in the voluntary process, we do have a mechanism to get some reporting and some information on it. But from a required reporting side of things, we don't have that.
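
To put a number on the threshold Rich describes, here is a minimal sketch of that qualification check, assuming a simple sum of facility-level reductions; the function and variable names are illustrative only and are not drawn from the NERC Event Analysis process documents.

```python
# Minimal sketch: checking whether a disturbance's aggregate non-consequential
# loss of inverter-based resources meets the 500 MW threshold Rich describes
# for the new Category 1i/1j events. Names and structure are illustrative only,
# not drawn from the NERC Event Analysis process documents.

CATEGORY_1_IBR_THRESHOLD_MW = 500  # aggregate-loss threshold mentioned above


def qualifies_as_category_1_ibr_event(facility_reductions_mw):
    """Return True if the summed reduction across facilities is at least 500 MW."""
    return sum(facility_reductions_mw) >= CATEGORY_1_IBR_THRESHOLD_MW


# Hypothetical breakdown loosely echoing the San Fernando numbers: ~50 facilities
# reducing output between 1 MW and 137 MW, totaling roughly 1,000 MW.
example_reductions_mw = [137, 90, 75, 60] + [14] * 46
print(qualifies_as_category_1_ibr_event(example_reductions_mw))  # True
```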

Dave Angell: If some entity ad hoc reports, the first thing that NERC would do then, would be to go through and determine if it meets one of these categories. So if it didn't meet a particular category, then NERC would not move forward in any sort of event analysis. Is that a fair statement?

Rich Bauer: Well, actually, maybe not. And the reason, Dave, is because in a number of instances ... and in fact, if we go back and look, as we've said before, this is number five in our ongoing saga. The first four events, Blue Cut, Canyon Two, Angeles Forest and Palmdale Roost, at the time those events happened, this category in the event analysis process was not in effect. And so none of those met a criterion in the event analysis process, they didn't meet the qualified event criteria.

However, in each of those instances, we did move forward and we asked for information, we gathered data and we went ahead and produced disturbance reports on those events. So even though an event wouldn't meet the criteria of a currently defined categorized event, if that event is of interest enough that we think that it warrants some analysis, we go ahead and, once again, on a voluntary basis, we ask for information and try to do some analysis and produce a disturbance report. 

So I guess the real point to all of that conversation is just to say that, "Well, just because it doesn't meet the criteria for a qualified event in the voluntary EA process, doesn't mean that we wouldn't try to gather data and analyze it."

Dave Angell: Well, let's go ahead and walk our way down into the key findings and recommendations there. Let's start with data. So, I guess one of the key findings here is that there's poor solar PV data resolution. So let's take a look at that.

Ryan Quint: This was something that we've experienced in multiple disturbances and I think something that the industry is grappling with. When we do event analysis, we often need really granular data to really understand the root cause of the abnormal, in most cases, performance that we're seeing, and we've been talking a lot about that internally, about how data leads to information and information leads to engineering decisions being made and vice versa, it goes the other way. And so we know what we're after, and to really understand what we're after in terms of why the resource tripped, why the resource reduced its power, why the resource took a couple minutes to return back to its pre-disturbance value, we need some fairly extensive data to understand why the resource behaved the way it did.

And in many cases, when we go out and ask for high speed data at the highest resolution available, we get back data resolutions on the order of one minute, or even in many cases, five minute data resolution, and that goes so far even that we've had facilities where we've showed up and said, "Hey, we noticed that you reduced output and you returned to pre-disturbance values in about three, four minutes and we would like to better understand what happened here," and the facility responds and says, "We didn't do that. We sat at full output the whole time because we have this data point on one side and this data point on the other side that shows we're at full output, and so we're not aware of any abnormal behavior that we did." 

So we then take the transmission service provider, the ISO data or whatever facility or owner's providing that data on the transmission side, which is typically two second, four second, one second type data, and you can see a beautiful reduction and recovery of the plant itself.

And so there's not a common understanding about what high speed data really means. And even with that, say one second data, we definitely can get an understanding of what happened and we can use our engineering judgment and experience with analyzing these other events and we can pinpoint, typically, just looking at the shape of a curve, what happened in that facility, and in some cases, we can even say why that happened. 

But really when it comes down to, "Okay, you lost some inverters, you tripped, why did you do that?" The data's not readily available from the facility owner. We really need point-on-wave, DFR-type data, inverter-level oscillography data and then sequence of event recording data from the individual inverters that may have tripped or entered into ride-through mode or something like that. Often, we need that time stamped to at least a millisecond, and in some events, including this one, you have timestamps that fall right on top of each other with poor data resolution and so you can't really tell what caused what, you just know certain things happened.

But really, it goes back to, a lot of the facilities reported data at the highest resolution they had on the order of one to five minutes, which really doesn't help from an events analysis standpoint. And we have to use engineering judgment to really look at, "Well, what happened?" And like I mentioned, if we know that the facility returned back to pre-disturbance output, say, in a couple minutes, it's not likely that an inverter tripped, but there's some type of abnormal behavior going on in terms of interactions with the plant controller or the inverter settings or things like that and we can work our way backwards to try to figure out, "What can we really say about what happened here?"

So our recommendation there is just to really have better, more adequate data monitoring for inverter based resources to determine the root causes of this abnormal performance. Like I mentioned, this is at the inverter level, the plant level, inverter fault code data, oscillography records, DFR records and having high resolution SCADA data archived in the facility. That data, as we've heard from multiple GOs, is often reported at one to two second resolution, but then it gets stored in their historian at a five minute data resolution after a period of time, and so the higher-resolution data doesn't get stored. With today's technology, we should be able to store high speed data, even SCADA data, for some length of time.
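
As a concrete illustration of that point, here is a small, entirely hypothetical sketch of how a momentary-cessation dip and a multi-minute recovery that are obvious in 1-second data can vanish once the same signal is archived at 5-minute resolution; the plant size, dip timing and ramp rate below are made-up numbers, not data from the report.

```python
# Entirely hypothetical sketch: a 1-second power trace with a momentary-cessation
# dip and a slow multi-minute recovery, and the same trace as a 5-minute historian
# might archive it. Plant size, dip timing and ramp are made-up numbers.

def one_second_trace(pre_mw=100.0, dip_at_s=30, recovery_s=240, n_samples=900):
    """Hypothetical plant output: full output, a dip to 0 MW, then a linear
    ramp back to pre-disturbance output over `recovery_s` seconds."""
    trace = []
    for t in range(n_samples):
        if t < dip_at_s:
            trace.append(pre_mw)
        elif t < dip_at_s + recovery_s:
            trace.append(pre_mw * (t - dip_at_s) / recovery_s)
        else:
            trace.append(pre_mw)
    return trace


def archive_at(trace, step_s=300):
    """Keep only one sample every `step_s` seconds, like a coarse historian."""
    return trace[::step_s]


high_res = one_second_trace()
archived = archive_at(high_res)
print(min(high_res))  # 0.0 -- the dip is plainly visible at 1-second resolution
print(archived)       # [100.0, 100.0, 100.0] -- the event vanishes at 5-minute resolution
```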

We also have a recommendation on this one regarding ... well, it goes back to some of our other reliability guidelines, where we really have stressed that transmission owners, per the FAC-001 standard, should establish and improve their data recording requirements to include all the things that we just talked about for, really, all BPS generating resources, but in particular, what we're really focusing on here is inverter based resources to make sure we can understand these types of performance.

In the report, we also did mention that FERC may consider adding this capability to the pro forma LGIA and SGIA as well, as needed. And really, we point back to the recommended practices that were put out by our NERC Inverter-Based Resource Performance Working Group (IRPWG); we have a reliability guideline that covers this in much more detail, and all of what I just talked about is really stressed in that guideline.

Dave Angell: One thing I'm noticing, though, is there's not a recommendation of synchronization. Is that something that we'd be looking at in the future or if you can get this data that you're looking for, you can synchronize just based on the information that you're provided?

Ryan Quint: Rich and I have both done this, particularly for inverter based resources, and then with the other types of events where synchronization type things have really been at the forefront. I think back to when I first started at NERC and we analyzed the Washington, D.C. disturbance and we had four timestamps, even within a single facility and we didn't necessarily know what unit tripped the other unit and how that all worked, and time synchronization was really critical there.

Again, going back to the reliability guideline that I mentioned, that one does recommend time synchronization to a common time reference, such as GPS with a GPS clock, for example. We're able to discern a lot of information with the solar plants because, with a plant with a specific behavior, you know when that fault happened, and it's clear in the data that we do have that there was a reduction; it comes down to the data resolution being what's really challenging us. And yes, time synchronization is important, but fortunately, we're not having some solar inverters then causing, say, abnormal behavior of other transmission components or, say, an RAS or something. In that case, time synchronization would be of utmost importance and so that's why we do recommend everything be timestamped.

Dave Angell: Now, moving onto another key finding, continued and improved analysis needed. So, explain a little bit more about that.

Rich Bauer: One of the things that we really recognized, Dave, is that when these disturbances happen and we identify facilities whose output reduced during the disturbance, we request data from the facility as to what the cause of their reduction was, did they experience inverter tripping? What delayed their return to pre-disturbance output levels, things like that. In every one of these disturbances, after we receive the data, we've always had a follow-up call with that entity and discussed their data and their findings, and one of the things that Ryan and I have observed in almost all of the disturbances is, there's not a lot of analysis that seems to take place before Ryan and I show up and start asking questions.

And I think, as an industry, if we're going to improve, I think that we just really need to encourage everybody that when your facility reduces output or if you have inverter tripping or things like that, I think that before NERC shows up or before Ryan and I show up and start asking questions, I think it would just be very advantageous if the industry would take that analysis on themselves. Immediately, if facilities trip or reduce output, analyze that, go do the analysis and determine, did it perform as we expected? Or was there some abnormalities? And I think that if every time that happened, then we might remedy a lot of these problems before we have another San Fernando type disturbance. That's just observations that Ryan and I have made in our ongoing saga of analyzing these five events, so. Do you have anything to maybe add to that, Ryan?

Ryan Quint: No, I think that that's really the main point there, that industry proactively analyzing these events is really critical here. And there's a benefit for the industry, whether it's the generator owner, generator operator, transmission planner, et cetera, to be analyzing these things because of the abnormality and the potential risk to reliability there. There's benefits to the generator owner for that plant to stay online producing megawatts, producing more energy, right? So having the generator owner aware of these events and analyzing them proactively is beneficial. The transmission planner, transmission owner knowing that these events are happening can help improve the models because they may be able to do some model analysis, model verification, model validation following these events and we want to see more of that and I think we need to start seeing more of that as an industry.

Dave Angell: So now we, I would say, get into the meat of the event itself and the inverter tripping. So there's some new key findings with this particular event and I guess, Ryan, you want to talk about that a little bit more with us?

Ryan Quint: Sure, yeah, and I covered this just very briefly, but maybe I'll spend just a moment here going into a little bit more detail. So in the San Fernando disturbance, there were really three causes of tripping of solar PV resources. Now, if you read the report, tripping was not a significantly large amount of the reduction of solar PV output. A lot of that was attributed to momentary cessation and the delayed recovery of resources following that, as well as the delayed recovery of some resources even in current injection mode, which we'll talk about here in a minute.

But in terms of the tripping piece, we're always wanting to be aware of, what are the causes of tripping and how could we potentially mitigate that. Like I mentioned with the Blue Cut Fire, we tried to work with the manufacturers, they're deeply involved with our IRPWG group, and so we communicate these with them and they go back and we've heard anecdotally from a number of manufacturers that they go back, they will ship off the findings to their folks that work in white lab coats and do testing for a living and make sure that the inverters can be built as robustly as possible to try to improve performance and not be the cause of these types of events. And so that's a great circular relationship we have with them that's beneficial to everybody.

But in this event, really what we saw were three forms of tripping; AC over-current, DC low voltage and AC low voltage. Now, not to get incredibly technical, I want to keep it at a high, easily digestible level for us here, but what we hear regularly is that inverters are current-limited devices, they're current sources and they're current-limited devices and they can only produce a certain amount of current and that's drastically lower than a synchronous machine, on the order of 1.1 to 1.2 per unit, and we haven't really seen much AC over-current tripping in the past. And so, that makes you scratch your head going, "Okay, you're telling me that an inverter is a current-limited device, the deep inner controls of the power electronics tightly control the current so we don't fry the power electronic switches, and then you tell me that we trip on AC over-current." So it makes you scratch your head a little bit going, "That doesn't necessarily line up."

So what we have learned, and it's a little bit speculative, but we've talked to a number of folks, is that the AC over-current protection and the DC low voltage protection, which were at a number of the same facilities and the same particular inverter manufacturer, were related in some way, shape or form, and we believe that it's related to the way that that manufacturer controls the switching of the IGBTs within the inverter in response to these really large, relatively large fault disturbances that cause a large perturbation in voltage, phase angle at the inverter terminals, et cetera. And so we think it has something to do with that, and we talked to a number of inverter manufacturers and they said, "Yeah, we struggle with this because when you get an instantaneous change in your terminal voltage and terminal angle, you have to do some tricky things to not have a large spike in inverter current and we manage that." So they told us that.

On that last form of tripping, we saw that particularly with one other manufacturer at just a couple of facilities, and really it's just AC low voltage protection. They have a specific setting in their inverter that says, "If voltage falls below this threshold for some period of time, then trip," and that was within the voltage ride-through curve of PRC-024, but again, without the high resolution data for voltage, we can't really tell exactly what happened, so all we can really say is, "Well, we know that that fault cleared in two and a half cycles. That's nothing even remotely close to the low voltage threshold setting in PRC-024, so we're not sure exactly what happened, but that one seems a little abnormal."
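
To make the reasoning here a bit more tangible, the sketch below works through the arithmetic with made-up settings: a normally cleared fault of about two and a half cycles lasts only around 42 milliseconds at 60 Hz, which should sit well inside any reasonable low-voltage trip time delay. The pickup and delay values are hypothetical placeholders, not the PRC-024 curve or any vendor's settings.

```python
# Rough illustration with made-up numbers (NOT the PRC-024 curve or any vendor's
# settings): a normally cleared ~2.5-cycle fault is only about 42 ms long at
# 60 Hz, which should sit well inside a reasonable low-voltage trip time delay.

NOMINAL_FREQ_HZ = 60.0


def cycles_to_ms(cycles, freq_hz=NOMINAL_FREQ_HZ):
    """Convert a duration in cycles to milliseconds."""
    return cycles / freq_hz * 1000.0


def ac_low_voltage_trip(voltage_pu, duration_ms, pickup_pu=0.50, delay_ms=150.0):
    """Trip only if voltage stays below the pickup for longer than the delay.
    Pickup and delay here are hypothetical placeholders."""
    return voltage_pu < pickup_pu and duration_ms > delay_ms


fault_ms = cycles_to_ms(2.5)
print(round(fault_ms, 1))                  # 41.7 ms
print(ac_low_voltage_trip(0.3, fault_ms))  # False -- the inverter should ride through
```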

So then going to the recommendation, at most, if not all, of the facilities, we're seeing partial tripping. So it's very rare that we see an entire plant trip on some form of tripping. Now, we may see an entire plant go into momentary cessation, but it's very unlikely that we see an entire plant trip, particularly for these normally cleared faults. So, again, going back to some of the discussions we had in IRPWG, partial tripping is still tripping and really, we want to avoid that if at all possible, particularly in relation to the NERC Standards and other potential interconnection requirements that maybe the local transmission service provider has in place. But we are seeing partial tripping events pretty regularly.

The other thing that we noticed is that in a lot of these events, the things that are causing tripping are unrelated to voltage and frequency protective relaying, which is what the PRC-024 Standard is, right? So the PRC-024 Standard is not a ride-through standard, it is a voltage and frequency protective relaying standard, whether that's a relay device or the controls within the inverter, but at the end of the day when we talk about DC reverse current, AC over-current, PLL loss of synchronism, those kinds of things that are causing tripping, there is no standard that says that you cannot trip for that type of performance, unless it's in a local interconnection requirement. And so, really that last recommendation in the report is, it's ultimately on the TO, the TP, the planning coordinator, [inaudible 00:33:58] operator, RC to establish clear interconnection requirements and then make sure that the models that are being submitted for these facilities account for these types of performance, and then in the studies, we're able to identify potential tripping issues.

A lot of this stuff, when we talk about PLL loss of synchronism, DC reverse current, those types of issues are not accounted for in the conventional dynamic models that a planner uses in an interconnection-wide case. Those would be only captured in maybe an electromagnetic transient (EMT) model, and that goes back to our recommendations and our guidelines, where we recommend all transmission planners, planning coordinators require EMT models for all newly interconnecting inverter based resources because we see these potentially systemic tripping type issues and other types of behavior for unbalanced grid conditions, low short circuit strength conditions, where an EMT model is absolutely needed to perform a study and identify any potential issues. Without it, the fundamental frequency positive sequence models are falling short in that regard because they weren't designed to address those types of things in the first place. I think I'll leave it at that.

Dave Angell: Yeah, you definitely went deep on that one, so. Let's talk a little bit about the dynamic behavior of the solar plants during the fault, and part of this looks to be about actually then returning post-disturbance too, right?

Rich Bauer: One of the issues that we've seen, and this one is, I guess maybe a term you could use for it would be a chronic issue, but anyway, we've seen this one repeat itself in practically every disturbance that we've had, and it's this plant level control interaction. And so, as many people are probably aware, in both of the NERC alerts that we've put out, in both of those NERC alerts, we talk about momentary cessation and we talk about the need that, if you utilize momentary cessation and you can't eliminate it, that returning to your pre-disturbance output level very quickly is critical. And in the last alert that we had, the last alert that we put out, we specified that we would like to restore output within one second. 

And what we see in a number of instances, is we see that the inverters will go into momentary cessation and then they'll come out of momentary cessation and they're restoring their output very quickly. But then what happens is, they get 30% of their pre-disturbance output level restored, and then they slow way down and it might take a minute or two or up to five minutes, and you see this slow ramp back to their pre-disturbance output level. And in almost every instance, what we identify is that the plant level controller is limiting how quickly the inverters can come back. 

And so, as we pointed out in the alerts and as I mentioned, is, it's real critical that we restore the output fairly quickly if we have to utilize momentary cessation. So that's one of the identified characteristics that we have highlighted in this report again that we need to look into and plants need to try to restore that output very quickly.

And then, very, very similar to that, we also have seen, especially in this latest, the San Fernando Disturbance, we've seen a number of instances where plants, maybe they actually aren't even going into momentary cessation, so the fault occurs, the inverters respond to that drop in voltage by increasing their reactive output, their VAR output. And then, in some instances, they sacrifice some real power output or some watt output for their VAR output. And so they'll actually reduce their megawatt output to increase their megavar output. Well then, once the fault clears, we're seeing that same type of interaction with the plant level control, where returning back to their megawatt value and reducing their var output, they start to do that fairly quickly and then we see a really slow ramp back.

So we've seen more than one type of scenario where the plant level controller is limiting how quickly they come back to normal operation after the disturbance and, once again, that presents some reliability risk to the system. So we want to return back to our pre-disturbance values fairly quickly upon fault clearing and the plant level control seems to impede that in a number of instances. So that's one of the observations that we've made with regards to dynamic behavior, so. Do you have some others, Ry, that you want to talk about?
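
Here is a simplified sketch of the interaction Rich describes, with assumed numbers rather than any actual plant controller's settings: the inverters restore a chunk of output within about a second after momentary cessation, but a plant-level controller ramp-rate limit stretches the rest of the recovery out over minutes.

```python
# Simplified sketch with assumed numbers (not any actual plant controller): the
# inverters restore a chunk of output within about a second after momentary
# cessation, but a plant-level ramp-rate limit stretches the rest of the
# recovery out over minutes.

def recovery_time_s(pre_mw, fast_fraction, plant_ramp_mw_per_min):
    """Seconds to return to pre-disturbance output: ~1 s of fast inverter
    recovery, then the remainder governed by the plant controller ramp limit."""
    fast_mw = pre_mw * fast_fraction
    remaining_mw = pre_mw - fast_mw
    return 1.0 + remaining_mw / plant_ramp_mw_per_min * 60.0


# Hypothetical 100 MW plant: inverters quickly restore ~30% of output, then a
# 20 MW/min plant-controller ramp limit governs the rest.
print(round(recovery_time_s(100.0, 0.30, 20.0)))  # 211 seconds, i.e. several minutes
```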

Ryan Quint: I think the last part there is the final recommendation in the report, which is that, again, everything comes back to having clear interconnection requirements. And at the most fundamental level, the way that a synchronous machine responds to a grid event is dictated by physics, and we have controllers like excitation systems and governors that try to manage the post-fault recovery in a stable way to bring that resource back to a new operating condition. Inverters are a different animal. Everything that controls the way they behave is driven by the power electronics controls that are programmed right into those inverters, plant-level controls, et cetera. And so, not having clear requirements was an old luxury that we had. Now, we really have to specify exactly how we want resources to behave.

And the manufacturers ... a lot of the requirements that would go in place are not all that stringent, right? It really just provides clarity to the developer, generator owner operator and the equipment manufacturer on, "Tell me how you want me to set my devices going out there because if you don't tell me, I'm going to pick something." So a lot of these resources have the proportional control or whatever they call it, K factor setting in their inverter and that's to dictate, "For a certain voltage drop, how much reactive do you want me to inject?" That K setting, K factor setting is a critical thing for system stability moving forward. 

But then also, having these, again, EMT models that show the interaction between the inverters and the plant level controller, and not having a delayed response and all that stuff, we really have got to make sure that that stuff gets fleshed out before the time of interconnection, before the commercial operation date so that when the resource comes online, it's going to be behaving properly and the way we want it to be behaving.
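
To illustrate the K-factor setting Ryan mentions, here is a minimal sketch of proportional reactive-current injection during a voltage dip. The K value, deadband and current limit are assumptions for illustration, not values taken from any standard or inverter.

```python
# Minimal sketch of the K-factor (proportional) reactive current injection Ryan
# mentions. The K value, deadband and current limit are assumptions for
# illustration, not values from any standard or inverter.

def reactive_current_injection_pu(terminal_voltage_pu, k_factor=2.0,
                                  deadband_pu=0.10, i_max_pu=1.1):
    """Additional reactive current (per unit) injected in proportion to the
    voltage deviation beyond a deadband, capped at the inverter current limit."""
    deviation_pu = 1.0 - terminal_voltage_pu
    if abs(deviation_pu) <= deadband_pu:
        return 0.0
    delta_iq_pu = k_factor * (abs(deviation_pu) - deadband_pu)
    delta_iq_pu = min(delta_iq_pu, i_max_pu)
    # Inject (positive) for low voltage, absorb (negative) for high voltage.
    return delta_iq_pu if deviation_pu > 0 else -delta_iq_pu


# A fault that depresses terminal voltage to 0.5 pu calls for ~0.8 pu of
# additional reactive current with K = 2 and a 10% deadband.
print(reactive_current_injection_pu(0.5))  # 0.8
```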

Dave Angell: Indeed. So one other thing I noticed in here is on setting changes, and a couple of the solar PV facility representatives said that they had made changes to their equipment settings and performance to improve their dynamic response to fault events. So what was the outcome of that?

Ryan Quint: We always like to pepper our disturbance reports with a little bit of positivity. And this was a bit of a success story, right? So we had ... a good example of this is, we had one solar PV facility, fairly large one that entered momentary cessation, and that resource had come online about a year ago, so it's not an old facility, it's a new facility. And we informed the facility that they had entered momentary cessation and we had questions on what the settings were and recommended that, "Hey, we noticed you got some nice, fancy, new inverters that are out there in the field that we know can be set differently." Before we knew it, they had informed us that, "Oh my gosh, we didn't know that we had that setting on. We went and changed that setting. We're now providing a K factor dynamic voltage control during disturbance ride through events and you won't see momentary cessation from us again." And so that was great. 

So we could call that a success story, and I think what made us feel really good is that the generator owner operator also informed us, "Oh, yeah, we're very aware of the NERC reliability guidelines, we've read them, we know we're not supposed to be doing that, this slipped by us," so they were aware of the guidance that has been provided through the NERC reliability guidelines. But it's still good that, at least, some GOs, some of the larger GOs, are aware of what the recommendation is and are being diligent about making changes.

Dave Angell: Very good. Well, I just want to thank the two of you for providing a lot of insight and detail that really helps us understand this disturbance report and the recommendation. A lot of folks may never have heard of the San Fernando Valley before, but the San Fernando Valley was actually made fairly famous back in the 1980s and it was because of the way the young adolescent girls from the San Fernando Valley talked and some of their mannerisms. One of the ways that they would talk, is they would say, "Like, whatever," and I guess what you're really looking for out of this whole program here, is for folks that have solar plants that seem to be performing maybe a little bit off to, rather than say, "Like whatever," actually dig in and analyze it and find out what's working improperly. Did I capture that correctly?

Rich Bauer: Absolutely.

Ryan Quint: It's a good way to put it. We do have to thank the San Fernando Valley, we do have to thank them for the rare event of a bolted three phase fault because in the planning realm, that's always studied and we often hear that, "Oh, we never have bolted three phase faults," so now we can say, "Well, no, actually remember that time in the San Fernando Valley we did and we had 50 solar PV plants abnormally respond to that event?" Now we have something, collectively as an industry, to hold up against that claim that we never have bolted three phase faults.

Dave Angell: The other thing on that, and Rich could attest to this, is the bolted three phase faults usually come about in test re-energization. Right, Rich?

Rich Bauer: Exactly. Exactly. That's ... I think there's two cases, right, Dave? It's test re-energization or forgetting to remove the grounds, right?

Dave Angell: Oh, yeah, that would be the other one.

Rich Bauer: Yeah.

Dave Angell: Indeed. Okay. Well, gentlemen, thank you for your time today. Really appreciate all these insights that you provided.